About The Role
We are looking for a Data Engineer to build, extend, and operate cloud-native data pipelines as part of a large-scale Azure data platform. This role covers end-to-end pipeline development, from source system ingestion through Bronze–Silver–Gold transformation layers, along with configuration-based framework extensions, evaluation engine integration, and data quality gate management across multiple data domains.
Key Responsibilities
- Design and build end-to-end data ingestion pipelines for multiple enterprise source systems using Fabric Dataflow Gen2 and Event Hub connectors
- Implement Bronze–Silver–Gold Delta Table data layering following established schema and field mapping conventions
- Write and maintain SQL evaluation logic in Synapse Analytics for automated control assessment across multiple data domains
- Perform data quality validation, schema reconciliation, and lineage verification across all pipeline stages
- Support parallel-run validation activities, ensuring data consistency between automated pipeline outputs and existing source reports
Required Skills & Experience
- 3+ years of experience in data engineering, ETL/ELT development, and cloud data pipeline delivery
- Strong hands-on experience with Microsoft Fabric (Dataflow Gen2, Lakehouse, Delta Tables) and Azure Synapse Analytics
- Proficiency in SQL for complex data transformation, evaluation logic, and cross-domain queries
- Experience building event-driven pipelines using Azure Event Hub and Event Grid
- Familiarity with config-driven or metadata-driven data pipeline frameworks
- Experience with Microsoft Purview for data lineage registration and catalog management is advantageous
- Ability to work in sprint-based delivery environments with multiple concurrent workstreams
- Strong attention to detail and the ability to validate complex data flows end-to-end across multiple systems