Role: Spark Scala Developer
Experience: 3-8 years
Location: Bengaluru
Employment Type: Full-time
What We're Looking For
We're hiring a Spark Scala Developer with real-world experience in Big Data environments, whether on-prem, in the cloud, or both. You should know how to write production-grade Spark applications, fine-tune their performance, and work fluently in Scala's functional style. Experience with cloud platforms and modern data tools such as Snowflake or Databricks is a strong plus.
Your Responsibilities
- Design and develop scalable data pipelines using Apache Spark and Scala (a representative sketch follows this list)
- Optimize and troubleshoot Spark jobs for performance (e.g., memory management, shuffle behavior, data skew)
- Work with massive datasets in on-prem Hadoop clusters or cloud platforms like AWS/GCP/Azure
- Write clean, modular Scala code using functional programming principles
- Collaborate with data teams to integrate with platforms like Snowflake, Databricks, or data lakes
- Ensure code quality, documentation, and CI/CD practices are followed
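By way of a rough, non-prescriptive illustration of this kind of work, here is a minimal Spark Scala job; the application name, paths, and schema are invented for the sketch:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Hypothetical batch job: aggregates raw orders into daily revenue per customer.
object DailyRevenueJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-revenue")
      // AQE coalesces small shuffle partitions and splits skewed ones at runtime.
      .config("spark.sql.adaptive.enabled", "true")
      .config("spark.sql.adaptive.skewJoin.enabled", "true")
      .getOrCreate()

    // Assumed input columns: order_id, customer_id, amount, order_date
    val orders = spark.read.parquet("s3://example-bucket/orders/")

    val revenue = orders
      .filter(F.col("amount").isNotNull)
      .groupBy("customer_id", "order_date")
      .agg(F.sum("amount").as("daily_revenue"))

    // Partition output by date so downstream readers can prune files.
    revenue.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/daily_revenue/")

    spark.stop()
  }
}
```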
Requirements
Must-Have Skills
- 3+ years of experience with Apache Spark in Scala
- Deep understanding of Spark internals: DAG, stages, tasks, caching, joins, partitioning
- Hands-on experience with performance tuning in production Spark jobs
- Proficiency in Scala functional programming (e.g. immutability, higher-order functions, Option/Either; a short sketch follows this list)
- Proficiency in SQL
- Experience with any major cloud platform: AWS, Azure, or GCP
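As a minimal sketch of the functional style we mean (the domain types and rules here are hypothetical, not a test):

```scala
import scala.util.Try

// Immutable domain types for a hypothetical parsing step.
final case class RawRecord(id: String, amount: String)
final case class Order(id: String, amount: BigDecimal)

object Parsing {
  // Either carries the failure reason instead of throwing.
  def parseAmount(s: String): Either[String, BigDecimal] =
    Try(BigDecimal(s)).toEither.left.map(_ => s"bad amount: $s")

  def parse(r: RawRecord): Either[String, Order] =
    parseAmount(r.amount).map(a => Order(r.id, a))

  // A higher-order function: the caller supplies the validation rule.
  def validOrders(records: List[RawRecord])(rule: Order => Boolean): List[Order] =
    records.flatMap(r => parse(r).toOption).filter(rule)
}
```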
Nice-to-Have
- Worked with Databricks, Snowflake, or Delta Lake
- Exposure to data pipeline and warehouse tools such as Airflow, Kafka, AWS Glue, or BigQuery
- Familiarity with CI/CD pipelines and Git-based workflows
- Comfortable with SQL optimization and schema design in distributed environments
Benefits
- Work with one of the Big 4 firms in India
- Healthy work environment
- Work-life balance