Industry: Staffing & recruitment services supporting technology, finance and enterprise clients across India. We engage with product-led and data-driven organisations to build scalable data platforms and operational analytics capabilities.
Primary job title: Data Engineer
Location: India (On-site)
Role & Responsibilities
- Design, implement and maintain scalable ETL/ELT pipelines for batch and streaming data to feed analytics and ML systems.
- Develop and optimise data transformation jobs using Python and Apache Spark; tune SQL queries and data models for performance.
- Author and manage workflow orchestration using Apache Airflow, including scheduling, dependency management and SLA monitoring.
- Build and operate cloud data warehouse solutions (Snowflake/Redshift/BigQuery), ensuring data quality, schema evolution and cost-efficiency.
- Collaborate with Data Science, BI and Product teams to translate business requirements into reliable data products and support production rollouts and incident resolution.
- Implement CI/CD, automated testing, observability and documentation for data pipelines; enforce engineering best practices and data governance standards.
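The pipeline duties above (deduplication, data-quality enforcement, idempotent re-runs) can be illustrated with a minimal, dependency-free sketch. All names here (`Event`, `clean_events`) and the quality rules are hypothetical, not part of this role's actual codebase:

```python
# Hypothetical sketch of one batch transformation step: deduplicate raw
# event records and apply a simple quality rule before loading downstream.
from dataclasses import dataclass
from typing import Iterable, List, Set


@dataclass(frozen=True)
class Event:
    event_id: str
    user_id: str
    amount: float


def clean_events(raw: Iterable[Event]) -> List[Event]:
    """Drop duplicate event_ids (keep first seen) and reject bad rows."""
    seen: Set[str] = set()
    out: List[Event] = []
    for ev in raw:
        if ev.event_id in seen:
            continue  # duplicates are skipped, so re-runs stay idempotent
        if ev.amount < 0:
            continue  # illustrative quality rule: reject negative amounts
        seen.add(ev.event_id)
        out.append(ev)
    return out
```

In a production pipeline this logic would typically run as a Spark job or SQL transformation orchestrated by an Airflow DAG, with the quality rules externalised into a testable, monitored validation layer rather than inlined in the transform.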
Skills & Qualifications
Must-Have
- Python
- SQL
- Apache Spark
- Apache Airflow
- AWS
- Snowflake
Preferred
Benefits & Culture Highlights
- Opportunity to work on end-to-end data platforms for fast-growing, data-first clients.
- Hands-on role with ownership of production pipelines, observability and performance optimisation.
- On-site collaboration with cross-functional teams and strong exposure to analytics and ML use-cases.
We are seeking a results-driven Data Engineer based in India who thrives on building reliable data infrastructure and delivering measurable business impact. If you have a track record of shipping production-grade pipelines and enjoy working on high-throughput data systems, this role offers technical ownership, growth and exposure to diverse industry datasets.
Skills: Snowflake, AWS, SQL, Apache Spark, ETL, Python, data modeling