
This is a full-time, work-from-home position for a US-based client. Working hours are 2 PM to 11 PM IST, which coincides with the US Eastern Time Zone.
Minimum Experience Required: 7 years in the Azure platform, Microsoft Fabric, and Direct Lake architecture
Role Overview
We are seeking a highly skilled Senior Azure Data Engineer with deep expertise in Microsoft Fabric, Direct Lake architecture, and the modern Azure data platform. In this role, you will design, build, and operationalize end-to-end data solutions that power enterprise-grade analytics and business intelligence. You will be a key contributor to our client's data transformation journey, owning the engineering of Fabric Lakehouses, Delta/Parquet pipelines, and Direct Lake-enabled Power BI semantic models.
This is a senior-level, hands-on role requiring both architectural depth and execution capability. You will collaborate closely with the Lead BI/Data Architect, BI Developers, and business stakeholders to deliver scalable, performant, and governed data products.
Key Responsibilities
Direct Lake Implementation (Primary Focus)
· Architect and implement Direct Lake semantic models in Microsoft Fabric using Spark notebooks and Data Factory pipelines.
· Create and maintain Delta tables in Fabric Lakehouse (OneLake) optimized for Direct Lake query performance.
· Build Fabric notebooks (PySpark / Scala) to automate Lakehouse table creation, schema evolution, and partition management.
· Design and execute Data Factory pipelines within Fabric to orchestrate data ingestion into Delta/Parquet format at scale.
· Configure Direct Lake datasets in Power BI, ensuring framing, fallback behavior, and model refresh strategies are correctly implemented.
· Perform Delta table optimization tasks including V-Order writes, Z-Order clustering, OPTIMIZE, VACUUM, and file compaction for Direct Lake readiness.
· Troubleshoot Direct Lake fallback to DirectQuery and implement fixes to maximize in-memory performance.
· Implement incremental load patterns (CDC, watermark, merge/upsert) in Delta tables to support real-time and near-real-time reporting.
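The incremental-load pattern named above (watermark plus merge/upsert) can be sketched as follows. This is a minimal illustration in plain Python; in Fabric it would typically be a PySpark notebook using Delta Lake's MERGE, and the table and column names here (`order_id`, `modified_at`) are illustrative assumptions, not part of the role description.

```python
def incremental_merge(target, source_rows, watermark, key="order_id", ts="modified_at"):
    """Upsert rows changed since the last watermark into the target table.

    target: dict keyed by business key, standing in for the Delta table.
    source_rows: list of dicts, standing in for the change feed / source query.
    watermark: last successfully processed timestamp.
    """
    # CDC-style filter: only rows modified after the previous watermark
    changed = [r for r in source_rows if r[ts] > watermark]

    # Merge/upsert: update when the key exists, insert when it does not
    for row in changed:
        target[row[key]] = row

    # Advance the watermark to the newest timestamp processed this run
    return max((r[ts] for r in changed), default=watermark)
```

Each pipeline run reads only rows past the stored watermark, merges them by key, and persists the new watermark for the next run, which is what keeps near-real-time refreshes cheap.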
Azure & Fabric Data Engineering
· Design and build data pipelines using Azure Data Factory (ADF), Fabric Data Factory, and Apache Spark on Fabric.
· Develop and manage Bronze / Silver / Gold Medallion architecture within Microsoft Fabric Lakehouse.
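The Bronze / Silver / Gold Medallion flow mentioned above can be sketched conceptually. This is a plain-Python illustration under assumed field names (`region`, `amount`); in a Fabric Lakehouse each layer would be a Delta table, with Bronze holding raw ingested data, Silver the cleansed records, and Gold the reporting-ready aggregates.

```python
def to_silver(bronze_rows):
    """Silver layer: cleanse raw Bronze records (drop malformed rows, normalize types)."""
    silver = []
    for r in bronze_rows:
        if r.get("amount") is None:
            continue  # drop malformed records rather than propagate them
        silver.append({
            "region": r["region"].strip().upper(),  # normalize text fields
            "amount": float(r["amount"]),           # enforce numeric type
        })
    return silver

def to_gold(silver_rows):
    """Gold layer: aggregate cleansed rows into a reporting-ready summary."""
    totals = {}
    for r in silver_rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals
```

The point of the layering is that each table has one job: Bronze is append-only landing, Silver enforces quality and schema, and Gold serves the semantic model directly.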
Job ID: 147464425
Skills:
DevOps, ADF, PySpark, Databricks, Azure, Python, P G frameworks like AI factory, Ultimate Pygentic, ADLS2