Ciklum India

Senior Data Engineer

  • Posted 5 hours ago

Job Description

Ciklum is looking for a Senior Data Engineer to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About The Role

As a Senior Data Engineer, you will join a cross-functional development team engineering the experiences of tomorrow. We are seeking a highly motivated, hands-on Full Stack Data Engineer with strong experience in Microsoft Fabric and modern Azure-based data platforms. The ideal candidate can work across the full stack of data engineering, from ingestion and transformation to Gold-layer curation, analytics enablement, API integration, and AI-assisted application workflows.

This role requires engineers who can work independently, leverage AI-assisted development for rapid delivery, and collaborate across data, cloud, and lightweight application layers. Exposure to MCP (Model Context Protocol), ReactJS integration, and modern AI-enabled engineering practices is highly preferred.

Responsibilities

  • Fabric Ecosystem & Full Stack Data Engineering
    • Manage OneLake structures and shortcuts for scalable enterprise data access
    • Design scalable Lakehouse solutions using Medallion Architecture (Bronze, Silver, Gold)
    • Build and optimize Delta Lake tables for analytics, reporting, and AI workloads
    • Develop data pipelines using Fabric Data Factory, Spark, and Notebooks
    • Create ingestion and transformation workflows for structured and semi-structured data
    • Implement orchestration, scheduling, monitoring, and recovery for enterprise pipelines
    • Design dimensional models (Star/Snowflake schemas) for BI and semantic layers
    • Build curated Gold-layer datasets for analytics and AI consumption
    • Support integration with Power BI semantic models and reporting platforms
  • Azure, Integration & Full Stack Development
    • Develop batch and incremental pipelines across Azure and external systems
    • Orchestrate ETL/ELT workflows using Fabric Pipelines and Azure Data Factory
    • Integrate Fabric platforms with APIs, AI services, and enterprise applications
    • Support MCP integration, AI workflows, and rapid prototyping initiatives
    • Collaborate on ReactJS-based apps, dashboards, and AI-driven user experiences
    • Automate workflows using Azure Functions, Logic Apps, Git, CI/CD, and Azure DevOps
  • Data Processing, Optimization & AI-Assisted Development
    • Develop and optimize PySpark notebooks for transformation, cleansing, and enrichment
    • Build efficient SQL queries, views, and stored procedures in Fabric Warehouse / Azure SQL
    • Implement optimization techniques including partitioning, caching, and query tuning
    • Monitor pipeline performance, troubleshoot failures, and improve system reliability
    • Implement logging, alerting, and operational best practices
    • Utilize AI-assisted development tools such as GitHub Copilot and modern AI coding assistants
    • Rapidly prototype and deliver scalable engineering solutions with minimal guidance
  • Governance, Security & Collaboration
    • Implement RBAC and secure data access across Fabric workspaces and Azure environments
    • Apply data quality validations and governance best practices
    • Support metadata management and lineage using Microsoft Purview
    • Collaborate with Data Architects, Analysts, BI Developers, Product Teams, and Business Stakeholders
    • Translate business requirements into scalable data and application solutions
    • Participate in Agile delivery processes, code reviews, and pull request workflows
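To give a concrete flavour of the Medallion Architecture (Bronze, Silver, Gold) work described above, here is a minimal conceptual sketch in plain Python. It is an illustration only: in Microsoft Fabric this logic would live in PySpark notebooks writing Delta tables to OneLake, and all table contents and field names below are hypothetical.

```python
# Conceptual sketch of the Bronze -> Silver -> Gold medallion flow.
# Plain Python dicts stand in for DataFrames/Delta tables here;
# in Fabric these steps would be PySpark notebooks or pipeline activities.

# Bronze: raw ingestion, kept as-is (duplicates and nulls included)
bronze = [
    {"order_id": 1, "amount": "100.5", "region": "south"},
    {"order_id": 1, "amount": "100.5", "region": "south"},  # duplicate row
    {"order_id": 2, "amount": None, "region": "NORTH"},     # bad record
    {"order_id": 3, "amount": "40.0", "region": "north"},
]

def to_silver(rows):
    """Cleanse: drop duplicates and null amounts, normalize types and case."""
    seen, silver = set(), []
    for r in rows:
        if r["order_id"] in seen or r["amount"] is None:
            continue
        seen.add(r["order_id"])
        silver.append({
            "order_id": r["order_id"],
            "amount": float(r["amount"]),
            "region": r["region"].lower(),
        })
    return silver

def to_gold(rows):
    """Curate: aggregate into a reporting-ready, business-level dataset."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'south': 100.5, 'north': 40.0}
```

The layering is the point: Bronze preserves raw history, Silver enforces quality and schema, and Gold exposes curated datasets that Power BI semantic models and AI workloads can consume directly.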
Requirements

  • 6 years of experience in Data Engineering
  • Hands-on exposure to Microsoft Fabric
  • Microsoft Fabric (Data Factory, OneLake, Synapse Data Engineering – basics)
  • Azure Data Services: ADLS Gen2, Azure SQL, Blob Storage
  • Strong SQL (joins, aggregations, performance tuning basics)
  • Python (PySpark) for data transformation
  • Azure Data Factory / Fabric Pipelines
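The "strong SQL (joins, aggregations)" requirement above is the kind of query shown in this small, hypothetical example. An in-memory sqlite3 database stands in for Fabric Warehouse / Azure SQL, and the schema is invented for illustration.

```python
# Minimal join + aggregation of the kind the SQL requirement refers to.
# sqlite3 stands in for Fabric Warehouse / Azure SQL; schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'north'), (2, 'south');
    INSERT INTO orders VALUES (10, 1, 100.0), (11, 1, 50.0), (12, 2, 75.0);
""")

# Join orders to customers and aggregate revenue per region.
rows = conn.execute("""
    SELECT c.region, SUM(o.amount) AS revenue
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()

print(rows)  # [('north', 150.0), ('south', 75.0)]
```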

Desirable

  • Azure Functions / Logic Apps
  • Event Hubs / streaming concepts
  • Microsoft Purview (basic exposure)
  • Cosmos DB

What's in it for you

  • Strong community: Work alongside top professionals in a friendly, open-door environment
  • Growth focus: Take on large-scale projects with a global impact and expand your expertise
  • Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications
  • Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies
  • Care: We've got you covered with company-paid medical insurance, mental health support, and financial & legal consultations

About Us

At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress.

India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.

Want to learn more about us? Follow us on Instagram, Facebook, and LinkedIn.

Explore, empower, engineer with Ciklum!

Interested already? We would love to get to know you! Submit your application. We can't wait to see you at Ciklum.

More Info

Job ID: 147477769
