We're looking for a Data Engineer to design, build, and scale modern data platforms on AWS. You'll work with Python, Spark, DBT, and AWS-native services in an Agile environment to deliver scalable, secure, and high-performance data solutions.
What you'll do
- Develop and optimize ETL/ELT pipelines with Python, DBT, and AWS services (Data Ops Live).
- Build and manage S3-based data lakes using modern data formats (Parquet, ORC, Iceberg).
- Deliver end-to-end data solutions with Glue, EMR, Lambda, Redshift, and Athena.
- Implement strong metadata, governance, and security using Glue Data Catalog, Lake Formation, IAM, and KMS.
- Orchestrate workflows with Airflow, Step Functions, or AWS-native tools.
- Ensure reliability and automation with CloudWatch, CloudTrail, CodePipeline, and Terraform.
- Collaborate with analysts and data scientists to deliver business insights in an Agile setting.
Required Skills & Experience
- 4–7 years of experience in data engineering, with 3+ years on AWS platforms
- Strong in Python (incl. AWS SDKs), DBT, SQL, and Spark
- Proven expertise with the AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Hands-on experience with workflow orchestration (Airflow/Step Functions)
- Familiarity with data lake formats (Parquet, ORC, Iceberg) and DevOps practices (Terraform, CI/CD)
- Solid understanding of data governance & security best practices
Bonus
- Exposure to Data Mesh principles and platforms like Data.World
- Familiarity with Hadoop/HDFS in hybrid or legacy environments
Please note: this is a hybrid work arrangement with 3 days a week in office (Bangalore / HSR Layout).