Odisys Global Consulting Services

Senior Azure Data Engineer

7-9 Years
  • Posted 6 hours ago

Job Description

This is a full-time, work-from-home position for a US-based client. Working hours are 2 PM IST to 11 PM IST (which coincides with the US Eastern Time Zone).

Minimum Experience Required: 7 years in the Azure platform, Microsoft Fabric, and Direct Lake architecture

Role Overview

We are seeking a highly skilled Senior Azure Data Engineer with deep expertise in Microsoft Fabric, Direct Lake architecture, and the modern Azure data platform. In this role, you will design, build, and operationalize end-to-end data solutions that power enterprise-grade analytics and business intelligence. You will be a key contributor to our client's data transformation journey, owning the engineering of Fabric Lakehouses, Delta/Parquet pipelines, and Direct Lake-enabled Power BI semantic models.

This is a senior-level, hands-on role requiring both architectural depth and execution capability. You will collaborate closely with the Lead BI/Data Architect, BI Developers, and business stakeholders to deliver scalable, performant, and governed data products.

Key Responsibilities

Direct Lake Implementation (Primary Focus)

  • Architect and implement Direct Lake semantic models in Microsoft Fabric using Spark notebooks and Data Factory pipelines.
  • Create and maintain Delta tables in the Fabric Lakehouse (OneLake), optimized for Direct Lake query performance.
  • Build Fabric notebooks (PySpark/Scala) to automate Lakehouse table creation, schema evolution, and partition management.
  • Design and execute Data Factory pipelines within Fabric to orchestrate data ingestion into Delta/Parquet format at scale.
  • Configure Direct Lake datasets in Power BI, ensuring framing, fallback behavior, and model refresh strategies are correctly implemented.
  • Perform Delta table optimization tasks, including V-Order writes, Z-Order clustering, OPTIMIZE, VACUUM, and file compaction, for Direct Lake readiness.
  • Troubleshoot Direct Lake fallback to DirectQuery and implement fixes to maximize in-memory performance.
  • Implement incremental load patterns (CDC, watermark, merge/upsert) in Delta tables to support real-time and near-real-time reporting.
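To illustrate the watermark-based merge/upsert pattern named in the last bullet, here is a minimal, framework-free Python sketch. In a real Fabric notebook this logic would be a PySpark Delta `MERGE` against a Lakehouse table; the function and field names here (`upsert_batch`, `updated_at`) are illustrative, not part of any Fabric API.

```python
from datetime import datetime, timezone

def upsert_batch(target: dict, batch: list, watermark: datetime) -> datetime:
    """Merge/upsert rows newer than `watermark` into `target`, keyed by 'id'.
    Returns the new watermark. (Illustrative stand-in for a Delta MERGE.)"""
    new_watermark = watermark
    for row in batch:
        ts = row["updated_at"]
        if ts <= watermark:         # already loaded in a prior run; skip
            continue
        target[row["id"]] = row     # insert new key or update existing (upsert)
        new_watermark = max(new_watermark, ts)
    return new_watermark

# Two incremental runs: only rows past the watermark are processed each time.
table = {}
wm = datetime(2024, 1, 1, tzinfo=timezone.utc)
run1 = [{"id": 1, "v": "a", "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)}]
wm = upsert_batch(table, run1, wm)
run2 = [
    {"id": 1, "v": "b", "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},  # update
    {"id": 2, "v": "c", "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},  # insert
]
wm = upsert_batch(table, run2, wm)
```

Persisting the watermark between runs (e.g., in a control table) is what makes the load incremental: each run reads only rows changed since the last successful run.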

Azure & Fabric Data Engineering

  • Design and build data pipelines using Azure Data Factory (ADF), Fabric Data Factory, and Apache Spark on Fabric.
  • Develop and manage a Bronze/Silver/Gold Medallion architecture within the Microsoft Fabric Lakehouse.
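As a sketch of the Medallion layering mentioned above, the following plain-Python example shows the typical responsibilities of each layer: Bronze holds raw rows as landed, Silver validates and deduplicates them, and Gold aggregates them into a business-ready shape. In Fabric these stages would be Delta tables populated by Spark; all function and field names here are illustrative assumptions.

```python
def to_silver(bronze: list) -> list:
    """Silver layer: drop invalid rows and deduplicate on the business key
    ('id'), keeping the last-seen version of each record."""
    latest = {}
    for row in bronze:
        if row.get("amount") is None:   # validation: discard malformed records
            continue
        latest[row["id"]] = row         # dedupe: later rows overwrite earlier ones
    return list(latest.values())

def to_gold(silver: list) -> dict:
    """Gold layer: business-level aggregate (total amount per region)."""
    totals = {}
    for row in silver:
        totals[row["region"]] = totals.get(row["region"], 0) + row["amount"]
    return totals

# Raw Bronze rows, including a duplicate and an invalid record.
bronze = [
    {"id": 1, "region": "east", "amount": 10},
    {"id": 1, "region": "east", "amount": 12},   # later version of id 1 wins
    {"id": 2, "region": "west", "amount": 5},
    {"id": 3, "region": "west", "amount": None}, # invalid, dropped in Silver
]
gold = to_gold(to_silver(bronze))
```

The key design point is that each layer is reproducible from the one below it, so a bug in Silver or Gold logic can be fixed by reprocessing without re-ingesting source data.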

More Info


Job ID: 147464425
