
Teknikoz

Fabric Data Engineer


Job Description

Greetings from Teknikoz

Experience: 5+ years

About the Role

We are looking for a hands-on Fabric Data Engineer who thrives in the details: someone who writes the PySpark, builds the pipeline, creates the table, and owns it end-to-end in production. This is not an advisory role. You will be deep in Microsoft Fabric and Azure Databricks daily, engineering the data foundations that power enterprise applications and Power BI reporting at scale.

You bring strong Python and PySpark skills, understand the Fabric ecosystem inside out, and know how to structure data so that both applications and analysts can consume it cleanly without workarounds. You care about pipeline reliability, data quality, and the kind of logging and alerting that means your team sleeps well at night.

Roles and Responsibilities

  • Design, build, and maintain data lakehouses on Microsoft Fabric (OneLake), following a layered approach (e.g., Bronze/Silver/Gold or equivalent).
  • Work with lakehouse and warehouse objects in Fabric (tables, notebooks, SQL endpoints, semantic models) to support downstream reporting and applications.
  • Implement incremental data loading, schema management, and performance tuning for Fabric-native workloads (a minimal incremental-load sketch follows this list).
  • Build and maintain Spark-based transformation pipelines in Azure Databricks using PySpark and Python, following a modular, testable code structure.
  • Use Databricks notebooks, jobs, and workflows to orchestrate data transformations, including data cleansing, enrichment, and aggregation.
  • Apply data engineering best practices such as medallion architecture, partitioning, and idempotent processing to ensure reliability and scalability.
  • Design and deliver a clean, stable consumption data layer that supports:
      • Enterprise applications (API-backed services, data products).
      • Power BI semantic models and reports requiring consistent, well-modeled data.
  • Collaborate with BI and product teams to define contracts, SLAs, and refresh cadences for each consumption endpoint.
  • Define and implement a logging framework for data pipelines so errors, retries, and performance metrics are visible and traceable (see the logging sketch after this list).
  • Build alerting mechanisms (e.g., via Azure Monitor, Databricks alerts, or Fabric-integrated monitoring) to detect job failures, data drift, and SLA violations (an illustrative webhook sketch follows this list).
  • Regularly review pipeline performance, troubleshoot bottlenecks, and optimize Spark clusters and query patterns.
  • Implement data quality checks and validation rules in PySpark and Databricks to catch bad records, schema drift, and business-logic issues (see the quality-check sketch after this list).
  • Work with governance and security teams to align on access controls, cataloging, and lineage within Fabric and Databricks.
  • Partner with analytics, engineering, and product teams to refine data models and improve overall data reliability and testability.
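
For illustration, a minimal sketch of the incremental, idempotent load pattern described above, using Delta Lake's MERGE API from PySpark. The table names, business key (order_id), and watermark column (ingested_at) are assumptions for the example, not details from the role:

```python
# Minimal sketch (not from the posting): idempotent Bronze -> Silver
# incremental load via a Delta Lake MERGE. Table names, the business key
# (order_id), and the watermark column (ingested_at) are assumptions.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# High-water mark: only pull Bronze rows newer than what Silver has seen.
last_seen = (
    spark.read.table("silver.orders")
    .agg(F.max("ingested_at").alias("wm"))
    .collect()[0]["wm"]
)

bronze = spark.read.table("bronze.orders_raw")
if last_seen is not None:
    bronze = bronze.where(F.col("ingested_at") > F.lit(last_seen))

# Deduplicate on the business key so replaying a batch stays idempotent.
updates = bronze.dropDuplicates(["order_id"])

# MERGE upserts: re-running the same slice produces no duplicate rows.
(
    DeltaTable.forName(spark, "silver.orders")
    .alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```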
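
The logging responsibility could be met in many ways; one lightweight pattern is a decorator that emits structured JSON per pipeline step and retries transient failures. Step names, retry counts, and backoff are illustrative:

```python
# Minimal sketch of a pipeline logging/retry wrapper; names are illustrative.
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

def logged_step(name, retries=2, backoff_s=30):
    """Log each step as structured JSON and retry transient failures."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries + 1):
                start = time.time()
                try:
                    result = fn(*args, **kwargs)
                    log.info(json.dumps({
                        "step": name,
                        "status": "success",
                        "attempt": attempt,
                        "duration_s": round(time.time() - start, 2),
                    }))
                    return result
                except Exception as exc:
                    log.error(json.dumps({
                        "step": name,
                        "status": "failed",
                        "attempt": attempt,
                        "error": str(exc),
                    }))
                    if attempt == retries:
                        raise
                    time.sleep(backoff_s)
        return wrapper
    return decorator

@logged_step("load_silver_orders", retries=2, backoff_s=30)
def load_silver_orders():
    ...  # the Spark work for this step goes here
```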
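
Alerting would normally ride on Azure Monitor, Databricks alerts, or Fabric's own monitoring, as the bullet notes; as a generic fallback, a failure hook can POST to an incoming webhook (e.g., a Teams or Slack channel). The URL and payload shape here are assumptions:

```python
# Minimal sketch: push a failure alert to an incoming-webhook URL.
# The webhook URL and the {"text": ...} payload shape are assumptions.
import json
import urllib.request

def send_alert(webhook_url: str, step: str, error: str) -> None:
    """POST a short failure message to an incoming webhook."""
    payload = {"text": f"Pipeline step '{step}' failed: {error}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)
```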
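
Quality checks can be expressed declaratively so rules are easy to review and extend. A minimal PySpark sketch, with an assumed table, columns, and quarantine target:

```python
# Minimal sketch: declarative row-level quality rules in PySpark.
# The table, columns, rules, and quarantine target are assumptions.
from functools import reduce
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("silver.orders")

# Each rule pairs a name with the condition that marks a row as bad.
rules = [
    ("null_order_id", F.col("order_id").isNull()),
    ("negative_amount", F.col("amount") < 0),
    ("future_order_date", F.col("order_date") > F.current_date()),
]

# Count violations per rule in a single pass over the data.
counts = df.agg(
    *[F.sum(F.when(cond, 1).otherwise(0)).alias(name) for name, cond in rules]
).collect()[0].asDict()

# Quarantine failing rows rather than letting them reach consumers.
is_bad = reduce(lambda a, b: a | b, [cond for _, cond in rules])
df.where(is_bad).write.mode("append").saveAsTable("quarantine.orders_rejects")

# Fail the run loudly if any rule fired, so alerting can pick it up.
if any((v or 0) > 0 for v in counts.values()):
    raise ValueError(f"Data quality violations: {counts}")
```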

Core Skills

  • Strong Python and PySpark development experience, including writing modular, reusable, and unit-testable code (see the sketch after this list).
  • Hands-on experience with Microsoft Fabric (lakehouse, warehouse, notebooks, pipelines, OneLake) and Azure Databricks.
  • Experience building end-to-end data pipelines (ingestion, transformation, consumption) for large-scale reporting and application workloads.

Operational & Quality Mindset

  • Deep understanding of pipeline reliability, monitoring, logging, and alerting in cloud data platforms.
  • Familiarity with data quality frameworks, data-drift detection, and reconciliation patterns.

Collaboration & Communication

  • Ability to work closely with BI/Power BI teams, product owners, and engineers to translate business needs into data models.
  • Comfortable documenting designs, sharing runbooks, and owning data pipelines in production.
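
As a sketch of what unit-testable PySpark looks like in practice: transformations written as pure DataFrame-in, DataFrame-out functions can be exercised with a local SparkSession and no lakehouse. The function, columns, and threshold are illustrative:

```python
# Minimal sketch: a pure PySpark transformation plus a pytest-style test.
# The function name, columns, and the 1000 threshold are assumptions.
from pyspark.sql import DataFrame, SparkSession, functions as F

def enrich_orders(orders: DataFrame) -> DataFrame:
    """Pure transformation: DataFrame in, DataFrame out; no I/O inside."""
    return orders.withColumn(
        "order_value_band",
        F.when(F.col("amount") >= 1000, "high").otherwise("standard"),
    )

def test_enrich_orders():
    # A local session is enough; the test never touches Fabric or Databricks.
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(1, 1500.0), (2, 20.0)], ["order_id", "amount"])
    bands = {r["order_id"]: r["order_value_band"]
             for r in enrich_orders(df).collect()}
    assert bands == {1: "high", 2: "standard"}
```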

Job ID: 145333537
