Role Summary
We are seeking an experienced Cloud Data Engineer with strong expertise in Airflow, SQL, Snowflake, Python, and AWS. The ideal candidate will have a solid background in designing and building scalable data pipelines, cloud-based ETL solutions, and modern data warehouse platforms. This role requires strong technical skills, a problem-solving mindset, and the ability to work in an Agile environment.
Responsibilities
- Design, develop, and maintain data pipelines and workflow orchestration using Apache Airflow.
- Build scalable and reliable ETL/ELT data pipelines using Python, SQL, and AWS data services.
- Develop and optimize Snowflake schemas, queries, stored procedures, and data models.
- Implement best practices in data ingestion, data transformation, and data quality validation.
- Collaborate with cross-functional teams to gather requirements and implement end-to-end data engineering solutions.
- Perform performance tuning, query optimization, and troubleshoot complex data issues.
- Work across the Software Development Life Cycle (SDLC) using Agile/Scrum methodologies.
- Ensure data security, governance, and adherence to architectural standards.
Qualifications
- 6+ years of professional experience in data engineering, big data, or data warehouse development.
- Strong hands-on experience with:
  - Apache Airflow (DAG design, scheduling, monitoring).
  - Python (data processing, automation, scripting).
  - Advanced SQL (complex queries, optimization, stored procedures).
  - Snowflake (data modeling, performance tuning, Snowpipe, tasks, streams).
  - AWS services such as S3, Glue, Lambda, Athena, RDS, Redshift, and IAM.
- Experience with Linux environments and shell scripting (Bash/PowerShell).
- Strong understanding of cloud-based ETL/ELT technologies.
- Experience working with JSON, streaming data, or Kafka/MSK.
- Excellent knowledge of RDBMS concepts and database objects.
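As a rough illustration of the JSON/streaming qualification, the loop below decodes a stream of JSON-encoded messages while tolerating malformed payloads. The in-memory list stands in for whatever the real source yields (e.g. a Kafka or MSK consumer); the payload shape is purely illustrative:

```python
import json
from typing import Iterable, Iterator

def consume(messages: Iterable[bytes]) -> Iterator[dict]:
    """Decode a stream of JSON messages, skipping malformed payloads."""
    for payload in messages:
        try:
            yield json.loads(payload)
        except json.JSONDecodeError:
            continue  # in practice: log and divert to a dead-letter topic

# Simulated message stream; a real consumer would poll a broker instead.
stream = [b'{"id": 1}', b'not json', b'{"id": 2}']
records = list(consume(stream))  # the malformed message is dropped
```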
Location and Travel
- This position is on-site (not remote).
- Travel is not required.