Project Role : Data Platform Engineer
Project Role Description : Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills : Microsoft Fabric
Good to have skills : NA
Minimum 3 year(s) of experience is required
Educational Qualification : 15 years of full-time education
Summary: As a Microsoft Fabric Engineer / Specialist, you will be responsible for designing, developing, and optimizing end-to-end analytics solutions using Microsoft Fabric. You will act as a senior contributor within the team, translating business requirements into reliable, scalable data solutions. Your daily activities will include building and maintaining data pipelines, developing transformation logic, supporting semantic models, and ensuring data quality and performance. You will collaborate closely with cross-functional teams, provide technical guidance to junior analysts, and contribute to decisions that improve solution quality, reusability, and platform standards.
Roles & Responsibilities:
Act as a Senior Analyst and subject matter contributor for Microsoft Fabric–based solutions.
Analyze business and data requirements and translate them into technical designs aligned with Fabric best practices.
Design and develop data ingestion pipelines using Fabric Data Factory, Dataflows Gen2, and supported connectors.
Build and manage Lakehouse and/or Warehouse objects including tables, views, and curated datasets.
Develop and optimize data transformations using Spark notebooks (PySpark/Spark SQL) and T-SQL.
Implement medallion architecture patterns (Bronze, Silver, Gold) for structured and curated data layers.
Support semantic model development and Power BI datasets, including measures, relationships, and basic performance tuning.
Perform data validation and reconciliation to ensure data accuracy and completeness.
Collaborate with leads and architects on design decisions and solution improvements.
Troubleshoot and resolve data pipeline failures and performance issues.
Participate in code reviews, documentation, and knowledge-sharing sessions.
Support production deployments and ongoing enhancements as part of the delivery lifecycle.
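To illustrate the medallion pattern (Bronze, Silver, Gold) mentioned in the responsibilities above, here is a minimal sketch in plain Python. It is illustrative only, not Fabric-specific: in practice this logic would run in a Spark notebook against Lakehouse tables, and all record shapes and field names (order_id, amount, region) are hypothetical.

```python
# Minimal medallion-pattern sketch (illustrative only; field names are
# hypothetical). Bronze holds raw records as landed; Silver holds cleaned,
# typed records; Gold holds a curated aggregate for reporting.

def to_silver(bronze_rows):
    """Clean Bronze records: drop rows missing the business key, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None:
            continue  # reject records without a business key
        silver.append({
            "order_id": str(row["order_id"]),
            "amount": float(row.get("amount") or 0.0),
            "region": (row.get("region") or "UNKNOWN").strip().upper(),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate Silver records into a curated Gold summary per region."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"order_id": 1, "amount": "10.5", "region": " west "},
    {"order_id": None, "amount": "99"},           # dropped in Silver
    {"order_id": 2, "amount": 4.5, "region": "west"},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'WEST': 15.0}
```

The key design point is that each layer only reads from the one before it, so data-quality rules live in one place (the Bronze-to-Silver step) rather than being repeated in every report.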
Professional & Technical Skills: Must-have skills with Microsoft Fabric are listed below.
Hands-on experience with Microsoft Fabric, including:
- Lakehouse, Warehouse, OneLake fundamentals
- Fabric Data Factory pipelines and Dataflows Gen2
Strong proficiency in SQL for querying, transformations, and data modeling.
Working knowledge of PySpark or Spark SQL for data processing.
Understanding of data warehousing concepts (fact/dimension tables, SCDs).
Experience with Power BI datasets and semantic models.
Familiarity with data ingestion patterns (full load, incremental load).
Ability to debug and resolve pipeline and data issues.
Basic understanding of Git-based version control and deployment practices.
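As a concrete reading of the SCD requirement above, here is a minimal Slowly Changing Dimension Type 2 sketch in plain Python. It is a sketch under stated assumptions, not Fabric-specific code: the column names (customer_id, city, valid_from, valid_to, is_current) are hypothetical, and a real implementation would use MERGE logic in T-SQL or Spark.

```python
# Minimal SCD Type 2 sketch (illustrative only; column names are hypothetical).
# On change, the current row is expired and a new version is appended, so
# history is preserved rather than overwritten.

def apply_scd2(dimension, incoming, load_date):
    """Expire changed current rows and append new versions (Type 2)."""
    current = {r["customer_id"]: r for r in dimension if r["is_current"]}
    for rec in incoming:
        existing = current.get(rec["customer_id"])
        if existing is None:
            # brand-new key: insert as the first current version
            dimension.append({**rec, "valid_from": load_date,
                              "valid_to": None, "is_current": True})
        elif existing["city"] != rec["city"]:
            existing["valid_to"] = load_date       # close the old version
            existing["is_current"] = False
            dimension.append({**rec, "valid_from": load_date,
                              "valid_to": None, "is_current": True})
        # unchanged rows are left untouched
    return dimension

dim = [{"customer_id": 1, "city": "Chennai", "valid_from": "2024-01-01",
        "valid_to": None, "is_current": True}]
dim = apply_scd2(dim, [{"customer_id": 1, "city": "Bengaluru"}], "2024-06-01")
# dim now holds two versions: the expired Chennai row and the current Bengaluru row
```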
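The incremental-load pattern listed above is commonly implemented with a high watermark. The following plain-Python sketch shows the idea; it is illustrative only, and the "modified_at" column and string timestamps are assumptions (in Fabric this filter would typically be pushed into a pipeline or Dataflow query).

```python
# Minimal high-watermark incremental-load sketch (illustrative only; the
# "modified_at" column is hypothetical). Only rows modified after the last
# recorded watermark are appended, and the watermark is advanced afterwards.

def incremental_load(source_rows, target_rows, watermark):
    """Append only rows modified after the last recorded watermark."""
    new_rows = [r for r in source_rows if r["modified_at"] > watermark]
    target_rows.extend(new_rows)
    # ISO-8601 strings compare correctly lexicographically
    new_watermark = max([watermark] + [r["modified_at"] for r in new_rows])
    return target_rows, new_watermark

source = [
    {"id": 1, "modified_at": "2024-06-01T10:00"},
    {"id": 2, "modified_at": "2024-06-02T09:00"},
]
target, wm = incremental_load(source, [], "2024-06-01T12:00")
print(len(target), wm)  # 1 2024-06-02T09:00
```

A full load is the degenerate case of the same routine with the watermark set to the minimum possible value.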
Additional Information: The candidate should have 3–5 years of experience in data engineering, BI, or analytics roles.
This position is based at our Bengaluru, Chennai, or Hyderabad office.
Minimum 15 years of full-time education (or equivalent).
Agile/project-based delivery with close collaboration across teams.
Independently handles assigned modules or datasets.
Actively contributes to improving coding standards and solution quality.
Provides reliable support during testing and production issues.