About the Role
We are looking for a Data Engineer to build, extend, and operate cloud-native data pipelines as part of a large-scale Azure data platform. The role covers end-to-end pipeline development, from source-system ingestion through Bronze–Silver–Gold transformation layers, along with configuration-based framework extensions, evaluation engine integration, and data quality gate management across multiple data domains.
Key Responsibilities
- Design and build end-to-end data ingestion pipelines for multiple enterprise source systems using Fabric Dataflow Gen2 and Event Hub connectors
- Implement Bronze–Silver–Gold Delta Table data layering following established schema and field mapping conventions
- Write and maintain SQL evaluation logic in Synapse Analytics for automated control assessment across multiple data domains
- Perform data quality validation, schema reconciliation, and lineage verification across all pipeline stages
- Support parallel-run validation activities to ensure data consistency between automated outputs and source reports