Job Description
We have an urgent opening for a Python developer. Please find the JD for the role below; the location is Noida.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL workflows using Python, SQL, and AWS.
- Develop and manage cloud-native data solutions using AWS services such as S3, Lambda, Glue, SNS, SQS, and CloudWatch.
- Implement efficient data ingestion, transformation, and loading processes across multiple data sources.
- Monitor, troubleshoot, and optimize data pipelines and cloud infrastructure for performance and reliability.
- Implement job scheduling and orchestration using enterprise scheduling tools such as Tidal or TWS.
- Build and maintain CI/CD pipelines using Jenkins for automated deployment and delivery.
- Work closely with business stakeholders, data analysts, and architects to understand requirements and deliver data-driven solutions.
- Ensure adherence to data governance, security standards, and best engineering practices.
- Participate in design reviews, code reviews, and technical documentation.
Required Skills
- 3+ years of experience in Data Engineering, Software Engineering, or related fields.
- Strong programming experience in Python.
- Advanced knowledge of SQL and relational databases.
- Hands-on experience with AWS services including S3, Lambda, Glue, SNS, SQS, and CloudWatch.
- Experience building data pipelines and data processing frameworks.
- Experience with job scheduling tools such as Tidal or TWS.
- Experience with CI/CD tools like Jenkins.
- Strong analytical and problem-solving skills.