
Design and implement scalable data solutions within the Microsoft Fabric ecosystem, leveraging Lakehouses, Data Factory Pipelines, Dataflows (Gen2), and Spark Notebooks.
Develop high-performance data pipelines using PySpark and SQL to handle large-scale data processing and complex transformations.
Apply Medallion Architecture principles (Bronze, Silver, and Gold layers) to ensure data quality, lineage, and reliability across the enterprise.
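As a sketch of what the Bronze/Silver/Gold flow involves, here is a minimal illustration on hypothetical in-memory records; in Fabric these layers would be Spark DataFrames persisted as Lakehouse tables, not Python lists.

```python
# Illustrative Medallion flow (hypothetical data, plain Python rather than Spark).

bronze = [  # Bronze: raw ingested rows, kept as-is for lineage
    {"order_id": "1", "amount": "100.50", "region": "EMEA"},
    {"order_id": "2", "amount": "bad", "region": "EMEA"},
    {"order_id": "3", "amount": "75.00", "region": "APAC"},
]

def to_silver(rows):
    """Silver: enforce types and drop rows that fail quality checks."""
    out = []
    for r in rows:
        try:
            out.append({**r, "amount": float(r["amount"])})
        except ValueError:
            pass  # in practice, quarantine bad rows to an error table
    return out

def to_gold(rows):
    """Gold: business-level aggregate (revenue per region)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'EMEA': 100.5, 'APAC': 75.0}
```

The key design point the layers capture: Bronze preserves the source verbatim for auditability, Silver applies quality gates, and Gold serves curated business aggregates.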
Build robust analytical models using Star and Snowflake schemas, ensuring efficient use of Fact/Dimension tables, surrogate keys, and comprehensive source-to-target mapping.
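To illustrate the surrogate-key part of dimensional loading, a minimal sketch (hypothetical customer dimension and sales fact; a real pipeline would use identity columns or hash keys in the warehouse):

```python
# Hypothetical illustration: minting surrogate keys while loading
# a customer dimension, then referencing them from a fact table.

source_rows = [
    {"customer": "Acme", "amount": 120.0},
    {"customer": "Globex", "amount": 80.0},
    {"customer": "Acme", "amount": 40.0},
]

dim_customer = {}  # natural key -> surrogate key

def surrogate_key(natural_key):
    """Return a stable integer surrogate key, minting one on first sight."""
    if natural_key not in dim_customer:
        dim_customer[natural_key] = len(dim_customer) + 1
    return dim_customer[natural_key]

fact_sales = [
    {"customer_sk": surrogate_key(r["customer"]), "amount": r["amount"]}
    for r in source_rows
]
print(dim_customer)   # {'Acme': 1, 'Globex': 2}
print(fact_sales[0])  # {'customer_sk': 1, 'amount': 120.0}
```

Decoupling facts from natural keys this way is what lets dimensions change (renames, slowly changing attributes) without rewriting fact history.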
Architect SAP data extraction workflows and implement Change Data Capture (CDC) mechanisms to maintain real-time or near-real-time data synchronization.
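The change-classification step at the heart of CDC can be sketched as a snapshot diff (hypothetical tables keyed by id). Log-based CDC, such as SAP delta extraction via ODP, is preferable at scale, but the insert/update/delete classification is the same:

```python
# Minimal snapshot-diff CDC sketch (hypothetical data).

def capture_changes(previous, current):
    """Compare two snapshots (dicts of id -> row) and emit change events."""
    changes = []
    for key, row in current.items():
        if key not in previous:
            changes.append(("insert", key, row))
        elif previous[key] != row:
            changes.append(("update", key, row))
    for key in previous:
        if key not in current:
            changes.append(("delete", key, None))
    return changes

prev = {1: {"name": "Acme"}, 2: {"name": "Globex"}}
curr = {1: {"name": "Acme Corp"}, 3: {"name": "Initech"}}
print(capture_changes(prev, curr))
```

Applying the emitted events to the target in order keeps it synchronized without a full reload.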
Provision and manage Azure Virtual Machines, including full setup and configuration to support data workloads.
Maintain a solid grasp of Azure Networking fundamentals, including VNets, Private Endpoints, and Synapse integration to ensure secure data transit.
Lead the validation of ETL/ELT lifecycles, focusing on ingestion accuracy, transformation logic, and final data reconciliation.
Execute rigorous SQL-based data validation during SIT (System Integration Testing) and UAT (User Acceptance Testing) phases.
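A hedged sketch of what such reconciliation queries look like, using sqlite3 stand-in tables (the table names and columns are hypothetical); against Fabric the same COUNT/SUM comparisons would run over the warehouse or lakehouse SQL endpoints:

```python
# Source-to-target reconciliation sketch: compare row counts and totals.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_orders (id INTEGER, amount REAL);
    CREATE TABLE target_orders (id INTEGER, amount REAL);
    INSERT INTO source_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO target_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

def reconcile(con, source, target):
    """Compare row counts and amount totals between two tables."""
    q = "SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {}"
    src = con.execute(q.format(source)).fetchone()
    tgt = con.execute(q.format(target)).fetchone()
    return {"rowcount_match": src[0] == tgt[0], "sum_match": src[1] == tgt[1]}

print(reconcile(con, "source_orders", "target_orders"))
# {'rowcount_match': True, 'sum_match': True}
```

In SIT/UAT these checks would typically extend to per-key diffs and column-level hash comparisons, not just aggregates.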
Operate within Git-based CI/CD workflows to automate deployments and maintain version control across all data artifacts.
Azure fundamentals: VNet, Private Endpoints, Synapse
Job ID: 144152059