Job Title: Data Engineer
Location: Gurugram, Delhi NCR (Onsite)
Employment Type: Permanent
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Scala and Spark
- Process and analyze large volumes of structured and unstructured data
- Build robust ETL/ELT workflows for data ingestion and transformation
- Optimize Spark jobs for performance and scalability
- Collaborate with cross-functional teams, including Data Science, Analytics, and Product
- Ensure data quality, reliability, and integrity across systems
- Implement data storage solutions using data lakes/warehouses
- Troubleshoot and resolve data-related issues
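To illustrate the kind of ETL/transformation work described above, here is a minimal sketch of an extract-transform-load flow in plain Scala. The record types and input data are hypothetical, and ordinary collections stand in for Spark Datasets so the example is self-contained; a real pipeline would read and write through a SparkSession instead.

```scala
// Hypothetical row types: an untyped source record and a validated target record.
case class RawEvent(userId: String, amountCents: String)
case class CleanEvent(userId: String, amountUsd: BigDecimal)

// Extract: in practice this would read from a data lake; here, an in-memory stub.
def extract(): Seq[RawEvent] =
  Seq(RawEvent("u1", "1250"), RawEvent("u2", "not-a-number"), RawEvent("u3", "300"))

// Transform: parse and validate, dropping malformed rows (a common quality gate).
def transform(raw: Seq[RawEvent]): Seq[CleanEvent] =
  raw.flatMap { e =>
    e.amountCents.toIntOption.map(c => CleanEvent(e.userId, BigDecimal(c) / 100))
  }

// Load: here we simply return the rows; a real job would write to a warehouse table.
def load(rows: Seq[CleanEvent]): Seq[CleanEvent] = rows

val result = load(transform(extract()))
// The malformed "u2" record is filtered out during transformation.
```

The same extract/transform/load staging maps directly onto Spark: `extract` becomes a `spark.read` call, `transform` becomes Dataset operations, and `load` becomes a `write` to a lake or warehouse table.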
Required Skills & Experience
- Strong hands-on experience with Scala and Apache Spark
- Solid understanding of distributed computing concepts
- Experience with ETL/ELT design patterns
- Proficiency in SQL and data modeling
- Experience with big data technologies; familiarity with the Hadoop ecosystem is a plus
- Familiarity with cloud platforms (AWS / Azure / GCP)
- Experience with data lakes / data warehousing solutions
- Knowledge of version control (Git) and CI/CD practices