
7+ years of experience in data engineering, with proficient knowledge of Spark and Scala.
Responsible for building scalable data pipelines, transforming large datasets, and optimizing performance using Apache Spark with Python.
Strong Scala programming skills, including functional programming concepts and object-oriented design.
Key Responsibilities
• Develop scalable data pipelines using PySpark and Spark SQL.
• Apply Scala proficiency, including functional and object-oriented techniques, where pipelines require it.
• Optimize Spark jobs for performance and resource efficiency.
• Ensure data quality, integrity, and security across workflows.
• Work with Hadoop ecosystem and cloud platforms (AWS/Azure/GCP).
• Preferred: Experience with Airflow, Kafka, and CI/CD tools.
Location: Bangalore
*For candidates with a very strong technical background, we can request the customer to consider PAN India.
Education
Master in Computer Application (M.C.A), Post Graduate Diploma in Computer Applications (PGDCA), or Bachelor of Computer Application (B.C.A)
Job ID: 147155253