Role: Data Engineer - Python, PySpark, Airflow
Experience: 3+ Years
Location: Bangalore | Hyderabad
Immediate Joiners Only
Key Responsibilities
- Develop new product features and enhance existing ones, with a strong focus on scalability and performance.
- Modernize existing system components by redesigning them to align with new architecture paradigms.
- Deeply understand the business domain, customer needs, and core use cases.
- Own and deliver complex engineering tasks end-to-end.
- Ensure adherence to non-functional requirements (stability, scalability, performance).
- Mentor junior engineers and influence technical decision-making.
- Work within a Scrum team and collaborate with cross-functional groups to solve challenging technical problems.
Requirements
- Minimum 3+ years of hands-on experience with Python, PySpark, and Airflow.
- Strong SQL skills and experience with both relational and NoSQL databases (Snowflake is a plus).
- Strong understanding of data models, ETL/ELT pipelines, and batch/real-time processing.
- Practical experience in Scala or Java.
- Experience with Docker, Kubernetes, and containerized environments.
- Solid experience building solutions on cloud platforms.
- Strong understanding of DevOps methodologies.
- Experience working in an Agile/Scrum environment.
- Passion for modern data engineering technologies and continuous learning.
- Ability to quickly dive into development activities and deliver high-quality code.
- English proficiency: B2 or above.