Microsoft Fabric Data Associate

  • Posted 5 hours ago

Job Description

Company Description

Cloudalytix is a cutting-edge consulting and delivery firm, specializing in cloud, data, and AI solutions for start-ups, SMBs, and large enterprises. Founded by industry veterans, the company bridges the gap between strategic advice and practical execution, enabling clients to achieve measurable results through digital transformation. Cloudalytix creates future-ready, secure, and scalable solutions with a strong emphasis on trust, innovation, and partnership. Cloudalytix serves diverse industries, including financial services, healthcare, manufacturing, retail, and emerging tech, striving to be the most trusted transformation partner in the industry.

Role: Microsoft Fabric Data Associate (Chennai | Onsite | 5 days/week)

Experience: 0–3 years

Employment Type: Full-time

Location: Chennai (On-site only)

Write to us: You can write to us directly at [Confidential Information]

Role Description

This is a full-time, on-site position located in Chennai for a Microsoft Fabric Data Engineer. The primary responsibilities include building and managing data pipelines and large-scale data processing solutions, developing robust data models, designing ETL processes, and managing data warehouses to facilitate business intelligence efforts. The role requires collaboration with cross-functional teams to support data-driven decision-making and analytics initiatives. Strong technical expertise and problem-solving skills will be key to success in this role. This role is ideal for someone who enjoys hands-on engineering, learning new things quickly, and turning messy data into reliable, well-modeled, and analytics-ready assets.

What you'll do

  • Build & Maintain Data Pipelines

Design, develop, and support scalable ETL/ELT pipelines using:

- Microsoft Fabric (Lakehouse, Data Engineering, Data Factory/Data Pipelines, Dataflow Gen2)

- Azure Synapse Analytics (SQL Pools, Spark, Pipelines)

- Azure Data Lake Storage (ADLS)

Ingest data from on-premises systems, files, APIs, and cloud sources.
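For illustration, an ingestion step like the above can be sketched in plain Python: fetch JSON records from an API and land them unmodified in a raw zone. (All URLs and paths here are hypothetical; in Fabric this work would typically be done with Data Factory pipelines or Dataflow Gen2 rather than hand-written scripts.)

```python
import json
import urllib.request


def fetch_records(url):
    """Pull JSON records from an API endpoint (URL is hypothetical)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


def land_raw(records, landing_path):
    """Write records unmodified, one JSON object per line (raw/bronze zone)."""
    count = 0
    with open(landing_path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
            count += 1
    return count
```

Landing data untouched before transforming it keeps the raw zone replayable if downstream logic changes.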

  • Lakehouse & Data Modeling

- Implement lakehouse / medallion architectures (Bronze/Silver/Gold).

- Build delta/parquet-based datasets and semantic models to support BI and advanced analytics.

- Optimize models for performance, reusability, and self-service consumption.
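For candidates new to the medallion pattern, the Bronze→Silver→Gold flow above can be sketched in plain Python. (Names and fields are illustrative; in Fabric these layers would be Delta tables processed in Spark notebooks, not in-memory lists.)

```python
def to_silver(bronze_rows):
    """Bronze -> Silver: cleanse and standardize raw records."""
    silver = []
    for row in bronze_rows:
        amount = row.get("amount")
        if amount is None:  # drop records failing basic validity checks
            continue
        silver.append({
            "customer": row["customer"].strip().lower(),
            "amount": float(amount),
        })
    return silver


def to_gold(silver_rows):
    """Silver -> Gold: aggregate into an analytics-ready shape."""
    totals = {}
    for row in silver_rows:
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + row["amount"]
    return totals


bronze = [
    {"customer": " Acme ", "amount": "10.5"},
    {"customer": "acme", "amount": 4.5},
    {"customer": "Globex", "amount": None},  # invalid, filtered out in Silver
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'acme': 15.0}
```

Each layer adds value (cleansing, then aggregation) while the raw Bronze data stays intact for reprocessing.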

  • Data Transformation & Processing

- Write efficient SQL and Python (and/or Spark) for data cleansing, transformation, and aggregation.

- Implement reusable transformation frameworks and patterns for common data needs.
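A "reusable transformation framework" can be as simple as composing small, single-purpose steps. A minimal sketch (hypothetical helpers; real pipelines would operate on Spark or Fabric dataframes rather than lists of dicts):

```python
def trim_strings(rows):
    """Strip leading/trailing whitespace from every string field."""
    return [{k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
            for r in rows]


def drop_nulls(key):
    """Return a step that removes rows where `key` is missing or None."""
    def step(rows):
        return [r for r in rows if r.get(key) is not None]
    return step


def run_pipeline(rows, steps):
    """Apply each transformation step in order."""
    for step in steps:
        rows = step(rows)
    return rows


raw = [{"id": "1 ", "name": None}, {"id": "2", "name": "ok"}]
clean = run_pipeline(raw, [trim_strings, drop_nulls("name")])
print(clean)  # [{'id': '2', 'name': 'ok'}]
```

Because each step has the same signature, common cleansing patterns can be reused across pipelines instead of being rewritten per source.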

  • Quality, Reliability & Governance

- Implement data quality checks, validation rules, and anomaly detection.

- Apply security and governance best practices using tools like Microsoft Purview or equivalent (RBAC, PII handling, data lineage, classifications).

- Ensure pipelines are observable with logging, monitoring, and alerting.
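Data quality checks like those above are often expressed as named rules applied per record. A hedged sketch (rule names and fields are illustrative; production checks would run inside the pipeline and feed monitoring/alerting):

```python
# Each rule is a (name, predicate) pair; a record fails a rule
# when the predicate returns False.
RULES = [
    ("amount_non_negative", lambda r: r["amount"] >= 0),
    ("customer_present", lambda r: bool(r.get("customer"))),
]


def validate(rows):
    """Return (row_index, rule_name) for every failed check."""
    failures = []
    for i, row in enumerate(rows):
        for name, check in RULES:
            if not check(row):
                failures.append((i, name))
    return failures


rows = [{"customer": "Acme", "amount": 10.0},
        {"customer": "", "amount": -5.0}]
print(validate(rows))  # [(1, 'amount_non_negative'), (1, 'customer_present')]
```

Reporting failures by rule name (rather than silently dropping rows) is what makes quality issues observable and alertable.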

  • DevOps & Lifecycle Management

- Use Git / Azure DevOps (Repos, Pipelines) for version control and CI/CD of data assets.

- Contribute to a structured lifecycle: requirements → design → development → testing → deployment → support.

  • Collaboration & Stakeholder Support

- Work closely with BI/Analytics teams, architects, and business stakeholders to understand data requirements.

- Provide well-documented datasets and views that enable reporting, dashboards, and AI models.

What we're looking for

Must Have

  • 0–3 years of hands-on experience in data engineering

Fresh graduates / career starters with strong academic projects, internships, or Fabric/Azure lab experience are also encouraged to apply.

  • Practical experience with some or all of the following:

- Microsoft Fabric components: Lakehouse, Data Engineering, Data Factory / Data Pipelines, Dataflow Gen2, Notebooks.

- Azure Synapse Analytics: Serverless/Dedicated SQL, Spark, Pipelines.

- Azure Data Lake Storage (ADLS) and file formats like Delta/Parquet.

  • Strong skills in:

- SQL (must-have).

- Python (for data processing, notebooks, or scripting).

  • Good understanding of:

- ETL/ELT patterns and distributed data processing concepts.

- Data warehousing / lakehouse concepts and dimensional modeling.

  • Familiarity with:

- Azure DevOps / Git for code and pipeline version control.

- Basic security concepts (RBAC, access control, credentials/secrets).

- Certifications such as DP-600 / DP-700 (Fabric Data Engineer / Power BI) or DP-203 (Azure Data Engineer Associate), or similar.

Nice to Have

  • Exposure to:
  • Power BI or Fabric semantic models (datasets, Direct Lake, DAX).
  • Databricks or other cloud analytics platforms.
  • Real-time ingestion (Event Hubs, Stream Analytics, Fabric Real-Time Analytics).
  • Microsoft Purview or other data catalog / governance tools.

Mindset & Soft Skills

  • Curious, eager to learn, and comfortable working with new tools and patterns.
  • Ability to translate business needs into technical data solutions.
  • Strong communication skills; can explain technical topics in simple terms.
  • Enjoys collaboration but can also work independently and take ownership.
  • Organized, with a focus on quality, documentation, and repeatability.
  • Open to feedback, mentoring, and continuous improvement (both giving and receiving).

Why Join Us

  • Real Impact: Work on live projects that directly enable analytics, AI, and decision-making.
  • Learning Culture: Exposure to modern Microsoft Fabric & Azure data stack, with opportunities for certifications and continuous upskilling.
  • Growth Path: Clear opportunities to grow into Senior Data Engineer / Architect / Fabric Specialist roles.
  • Collaborative Environment: Work with architects, BI developers, and consultants in a supportive, learning-focused setup.


Job ID: 137383281