Job Summary:
We are looking for an experienced AWS Data Engineer to design and maintain scalable data pipelines and cloud-based data platforms on AWS. The ideal candidate has strong expertise in data engineering, ETL, and cloud architecture.
Key Responsibilities:
- Build and optimize data ingestion, transformation & storage pipelines using AWS Glue, Lambda, EMR, Redshift, S3
- Develop and manage ETL/ELT workflows for analytics & reporting
- Design and maintain data lakes and data warehouses
- Ensure data quality, validation, governance & security
- Optimize performance and cost across compute & storage
- Implement Infrastructure as Code (IaC) using Terraform / CloudFormation
- Support CI/CD, DevOps, and automated pipeline deployments
Required Skills and Qualifications:
- 5-8 years of experience in Data Engineering / ETL / Cloud Data Solutions
- Strong hands-on experience with AWS services: Glue, S3, Lambda, Redshift, EMR, Athena, Kinesis, Step Functions
- Expertise in SQL & Python
- Strong knowledge of data modeling (OLTP/OLAP), warehousing & performance tuning
- Experience with ETL tools: Glue, Talend, Informatica, dbt
- Familiarity with Spark / Hadoop / PySpark
- Knowledge of Git, CI/CD & AWS security best practices
Soft Skills:
- Strong analytical & problem-solving skills
- Excellent communication & teamwork
- Ability to work independently in agile environments
- Detail-oriented with a focus on data quality
Preferred Qualifications:
- Experience with Kafka, Kinesis, MSK
- Exposure to Airflow, Snowflake, Databricks, dbt
- Understanding of MLOps / Data Science pipelines
- AWS certifications (Data Analytics / Solutions Architect)