Job Description
Responsibilities
Lead the design and implementation of scalable data architectures (data lakes, warehouses, streaming systems).
Build and optimize ETL/ELT pipelines for diverse data sources, ensuring high performance and reliability.
Drive data governance, security, and compliance initiatives across all data platforms.
Mentor junior engineers and provide technical guidance to cross-functional teams.
Collaborate with stakeholders to translate business requirements into technical solutions.
Implement automation and monitoring frameworks to ensure operational excellence.
Evaluate and adopt emerging AWS services and modern data engineering tools to enhance capabilities.
Mandatory Skills
5+ years of professional experience in data engineering, including at least 4 years working on AWS.
Deep expertise in AWS services: S3, Glue, Redshift, Athena, EMR, Kinesis, DynamoDB, Lambda, Step Functions.
Strong proficiency in SQL, Python, and Spark for data processing and pipeline development.
Proven experience with workflow orchestration tools (Airflow, Dagster, Step Functions).
Solid understanding of data modeling, partitioning strategies, and performance tuning.
Hands-on experience with CI/CD pipelines, Git, and Infrastructure-as-Code (Terraform/CloudFormation).
Familiarity with containerization (Docker, Kubernetes) and microservices-based architectures.