
Job Title: Data Platform Engineer / Developer
Location: India
Job Description
This is a critical, high-impact role at the core of our client delivery capability. We are seeking a
Data Platform Engineer with architectural depth who can design and build production-grade
data pipelines, lakehouse implementations, and streaming ingestion systems directly within
client environments. You will not only build but also shape the platform architecture, bridging
engineering execution with architectural intent. You will operate across clients in Financial
Services, Healthcare, Retail, Manufacturing, Technology, Energy, Media, Government,
Education, and Hospitality, adapting platform implementations to each industry's data
volumes, compliance requirements, and operational context.
Key Responsibilities
● Design and implement end-to-end ELT/ETL pipelines including batch, micro-batch, and
real-time streaming patterns
● Build and configure lakehouse platform components covering ingestion, transformation,
storage, and serving layers
● Implement data pipeline orchestration including dependency management, SLA
monitoring, and failure recovery patterns
● Architect and deploy event-driven ingestion systems for high-velocity, high-volume
data streams
● Design and enforce data quality frameworks, schema validation, and data contract
standards within pipelines
● Adapt platform implementations to meet industry-specific data compliance
requirements (e.g., PHI handling in healthcare, PII controls in financial services, data
residency in government)
● Configure and optimize platform environments across client infrastructure including
compute, storage, and networking layers
● Collaborate with Data Architects to ensure engineering implementations faithfully
realize architectural blueprints
● Develop reusable pipeline templates, infrastructure-as-code modules, and platform
accelerators for multi-client reuse
Soft Skills & Culture Fit
Builder mentality, detail-oriented, proactive communicator, collaborative, quality-obsessed,
adaptable to client environments
Industry & Domain Exposure
Candidates should have working experience or demonstrable exposure in two or more of our
served industries: Financial Services, Healthcare & Life Sciences, Retail & Consumer Goods,
Manufacturing & Supply Chain, Technology & Telco, Energy & Utilities, Media & Entertainment,
Government & Public Sector, Education, or Hospitality & Travel. Familiarity with at least one of
our core capability areas - Modern Supply Chain & Operations, Finance AI, Growth & Revenue
AI, Legal & HR Next, or Enterprise Office Transformation - is strongly preferred.
Technical Requirements
Must-Have Skills
● Tier 1: SQL and advanced query optimization, Python or Scala for pipeline
development, cloud data platforms (AWS/Azure/GCP), batch and streaming pipeline
design, API and event-based integration, version control and CI/CD for data pipelines
● Tier 2: Lakehouse architecture patterns (Delta/Iceberg/Hudi), data pipeline
orchestration frameworks, stream processing frameworks, data contract and schema
registry design, infrastructure-as-code for data platforms
Nice-to-Have Skills
Familiarity with data mesh platform enablement, semantic layer integration, data observability
tooling, multi-cloud data platform deployments
Job ID: 147469893
Skills:
Snowflake, Java, Apache Spark, Kafka, JSON, Avro, SQL, Databricks, Sybase IQ, Kubernetes, Python, Parquet, Apache Iceberg, CI/CD tooling, Hadoop ecosystem technologies