
We are seeking a highly skilled and motivated Data Engineer to join our data team. The ideal
candidate will have a strong foundation in cloud data architectures, preferably on Azure or an
equivalent platform (GCP/AWS). You will play a pivotal role in
designing, building, and maintaining scalable data pipelines, ensuring data integrity, and
facilitating seamless data migration from various legacy sources to modern cloud platforms.
Your primary focus will be on SQL proficiency and data integration using tools like Azure Data
Factory (ADF), Google Cloud Dataflow, or Cloud Composer, supported by a solid
understanding of data modeling principles.
Key Responsibilities
ETL/ELT Pipeline Development: Design, develop, and maintain robust data pipelines
using Azure Data Factory (ADF), Google Cloud Dataflow, or Cloud Composer (Airflow)
to ingest, transform, and load data from diverse sources (on-premise databases, APIs, flat
files) into our data warehouse/data lake.
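As an illustration of the pattern these tools orchestrate, here is a minimal sketch of an extract-transform-load flow in plain Python; the function names, the in-memory source, and the SQLite target are illustrative stand-ins, not part of this role's actual stack.

```python
import sqlite3

def extract(rows):
    """Extract: yield raw records from a source (here, an in-memory list
    standing in for an on-premise database, API, or flat file)."""
    yield from rows

def transform(record):
    """Transform: normalize field names and types before loading."""
    name, amount = record
    return name.strip().lower(), round(float(amount), 2)

def load(conn, records):
    """Load: write transformed records into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", records)
    conn.commit()

source = [("  Alice ", "10.5"), ("BOB", "3.256")]
conn = sqlite3.connect(":memory:")
load(conn, (transform(r) for r in extract(source)))
print(conn.execute("SELECT name, amount FROM sales").fetchall())
# [('alice', 10.5), ('bob', 3.26)]
```

In ADF, Dataflow, or Composer the same three stages become configured activities or DAG tasks rather than hand-written functions.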
Data Migration: Lead and execute complex data migration projects, moving data from
legacy systems to cloud-based solutions while ensuring zero data loss and minimal
downtime.
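A core part of migration work is validating that nothing was lost in transit. The sketch below copies a table between two SQLite databases (standing in for a legacy source and a cloud target — the table and column names are hypothetical) and applies a basic row-count reconciliation check.

```python
import sqlite3

def migrate_table(src, dst, table):
    """Copy a table from a legacy database to a target database and
    verify row counts match (a basic zero-data-loss check)."""
    rows = src.execute(f"SELECT id, name FROM {table}").fetchall()
    dst.execute(f"CREATE TABLE {table} (id INTEGER PRIMARY KEY, name TEXT)")
    dst.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    dst.commit()
    src_count = src.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    dst_count = dst.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    assert src_count == dst_count, "row count mismatch after migration"
    return dst_count

legacy = sqlite3.connect(":memory:")  # stands in for SQL Server / Oracle
cloud = sqlite3.connect(":memory:")   # stands in for the cloud target
legacy.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
legacy.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "Acme"), (2, "Globex")])
print(migrate_table(legacy, cloud, "customers"))  # 2
```

Production migrations would add checksums or column-level comparisons on top of the row count, but the reconciliation step itself is the same idea.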
Data Modeling: Design and implement efficient data models (Star schema, Snowflake
schema, Data Vault) to support business intelligence and analytics requirements.
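To make the star-schema idea concrete: a central fact table holds measures and foreign keys, while dimension tables hold descriptive attributes. The tiny example below (table and column names are invented for illustration) builds one fact and two dimensions in SQLite and runs a typical analytics join.

```python
import sqlite3

# A tiny star schema: one fact table referencing two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE fact_sales  (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    amount      REAL
);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO dim_date    VALUES (10, '2024-01-01');
INSERT INTO fact_sales  VALUES (1, 10, 5.0), (2, 10, 7.5), (1, 10, 2.5);
""")

# Analytics query: join the fact to its dimensions and aggregate.
totals = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
print(totals)  # [('gadget', 7.5), ('widget', 7.5)]
```

A Snowflake schema would further normalize the dimensions; Data Vault splits them into hubs, links, and satellites. The fact/dimension join pattern above is the common core.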
SQL Optimization: Write complex, highly optimized SQL queries for data transformation,
validation, and analysis. Performance tune existing queries within Azure PostgreSQL or
equivalent.
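Performance tuning usually starts with reading the query plan. The sketch below uses SQLite's `EXPLAIN QUERY PLAN` as a lightweight stand-in for PostgreSQL's `EXPLAIN ANALYZE` to show the classic fix: an index turning a full table scan into an index search. Table and index names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(1000)])

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 42"

# Without an index, the planner falls back to a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before[0][3])  # e.g. "SCAN orders" (wording varies by SQLite version)

# A covering index on the filter column lets the planner search instead.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after[0][3])   # e.g. "SEARCH orders USING COVERING INDEX idx_orders_customer ..."
```

In PostgreSQL the workflow is the same in spirit: run `EXPLAIN (ANALYZE)`, spot sequential scans on large filtered tables, and add or adjust indexes accordingly.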
Cloud Infrastructure: Architect and manage data storage solutions using Azure and GCP
services (Azure Database for PostgreSQL, Azure Data Lake Storage Gen2).
Scripting & Automation: Utilize Python (or other scripting languages) for data
manipulation, automation of routine tasks, and bridging gaps in standard ETL tool
capabilities.
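A typical gap-bridging script is small-scale data cleanup that a standard ETL tool may not cover out of the box. The stdlib-only sketch below (column names and rules are invented for illustration) trims whitespace, drops blank rows, and standardizes a numeric column.

```python
import csv
import io

def clean_csv(raw_text):
    """Routine cleanup before loading: trim whitespace, drop blank rows,
    and format the amount column to two decimal places."""
    reader = csv.DictReader(io.StringIO(raw_text))
    cleaned = []
    for row in reader:
        if not any(v.strip() for v in row.values()):
            continue  # skip fully blank rows
        cleaned.append({
            "name": row["name"].strip(),
            "amount": f"{float(row['amount']):.2f}",
        })
    return cleaned

raw = "name,amount\n Alice ,10.5\n,\nBob,3\n"
print(clean_csv(raw))
# [{'name': 'Alice', 'amount': '10.50'}, {'name': 'Bob', 'amount': '3.00'}]
```

The same logic scales up naturally to Pandas or PySpark when file sizes outgrow the standard library.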
Data Quality & Governance: Implement data quality checks and validation processes to
ensure accuracy and consistency across the data lifecycle.
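Concretely, quality gates are often simple, explicit checks run before data is published downstream. A minimal sketch, assuming records arrive as dicts keyed by an `id` column (both the key name and the rules are illustrative):

```python
def check_quality(rows, key="id"):
    """Basic data-quality gates: no NULL keys, no duplicate keys.
    Returns a list of issue labels; an empty list means the batch passes."""
    issues = []
    keys = [r.get(key) for r in rows]
    if any(k is None for k in keys):
        issues.append("null key")
    if len(set(keys)) != len(keys):
        issues.append("duplicate key")
    return issues

good = [{"id": 1}, {"id": 2}]
bad = [{"id": 1}, {"id": 1}, {"id": None}]
print(check_quality(good))  # []
print(check_quality(bad))   # ['null key', 'duplicate key']
```

In practice such checks are wired into the pipeline itself (e.g. as a validation activity or DAG task) so a failing batch halts the load rather than silently corrupting downstream reports.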
Collaboration: Work closely with Data Scientists, Analysts, and business stakeholders to
understand data requirements and translate them into technical solutions. Familiarity with
AI-assisted tooling for day-to-day work and automation is also expected.
Mandatory Qualifications
Strong SQL Skills: Expert-level proficiency in SQL is non-negotiable. Ability to write
complex queries, analyze query performance, and manage database schemas (PostgreSQL).
Cloud Data Integration: Proven experience with Azure Data Factory (ADF), Google
Cloud Dataflow/Composer, or equivalent enterprise ETL tools (e.g., Informatica Cloud,
Matillion, Talend) in a cloud environment.
Cloud Familiarity: Hands-on experience with cloud platforms (Azure and/or GCP) and an
understanding of cloud storage, compute, and networking concepts.
Data Migration Experience: Demonstrated history of successfully migrating data from
heterogeneous sources (e.g., SQL Server, Oracle, PostgreSQL, on-premise files) to the
cloud.
Communication: Excellent verbal and written communication skills. Ability to explain
complex technical concepts clearly to non-technical stakeholders.
Preferred Qualifications (Nice to Have)
Scripting: Proficiency in Python (Pandas, PySpark, Apache Beam) for data processing.
CI/CD: Familiarity with DevOps practices for data pipelines (Azure DevOps, Cloud Build,
GitHub Actions).
Certifications: Any relevant Data Engineering certification, such as Microsoft Certified:
Azure Data Engineer Associate or Google Cloud Professional Data Engineer.
Job ID: 137797979