
fairdeal.market

Senior Data Engineer


Job Description

About Fairdeal.Market

Fairdeal.Market is a rapidly growing B2B quick commerce company offering a wide range of products with delivery times as fast as 60 minutes. Our mission is to ensure that every shopping bag worldwide can be filled efficiently and sustainably. As we continue to scale, we are building a strong data backbone to power speed, accuracy, and intelligence across operations — from warehouse to last mile.

Role Overview

The Senior Data Engineer will anchor our data platform team. You will own the design and operation of our core data pipelines, streaming infrastructure, and analytics systems, all at Real India scale. This is not a maintenance role: you will make architectural decisions that directly impact product velocity and business outcomes for hundreds of millions of future users. The ideal candidate is a strong systems thinker and hands-on builder, comfortable working through ambiguity in a fast-paced environment.

Key Responsibilities

  • Data Pipeline Architecture and Ownership:

Design, build, and maintain scalable, fault-tolerant data pipelines handling high-velocity event streams across the Fairdeal.Market ecosystem. Own the full data lifecycle from ingestion to serving — with a focus on reliability, low latency, and cost efficiency.

  • Data Lakehouse and Platform Engineering:

Architect and evolve our data lakehouse — including ingestion, storage, transformation, and serving layers. Drive standardization of data models across OLAP and OLTP systems. Maintain clean, well-documented schemas and enforce data contracts.
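
To make "enforce data contracts" concrete, here is a minimal sketch in Python of a contract check at the ingestion boundary. The event fields and the enforce_contract helper are illustrative assumptions, not Fairdeal.Market's actual schema:

    # Hypothetical data contract for an order event; field names are assumptions.
    REQUIRED_FIELDS = {
        "order_id": str,
        "warehouse_id": str,
        "event_ts": int,     # epoch milliseconds
        "amount_inr": float,
    }

    def enforce_contract(event: dict) -> dict:
        """Reject events that violate the contract before they enter the lakehouse."""
        for field, expected_type in REQUIRED_FIELDS.items():
            if field not in event:
                raise ValueError(f"contract violation: missing field {field!r}")
            if not isinstance(event[field], expected_type):
                raise ValueError(
                    f"contract violation: {field!r} should be {expected_type.__name__}"
                )
        return event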

  • Streaming and Real-Time Infrastructure:

Build and manage real-time data streams using Kafka, Flink, or Spark Streaming to power operational dashboards, ML feature pipelines, and alerting systems. Ensure exactly-once semantics and resilient recovery at scale.
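
As one illustration of the exactly-once requirement, below is a minimal sketch of Kafka's transactional-producer pattern using the confluent_kafka client; the broker address, topic name, and transactional id are placeholder assumptions, and the actual stack may equally be Flink or Spark Streaming.

    # Sketch of transactional (exactly-once) publishing with confluent_kafka.
    from confluent_kafka import Producer, KafkaException

    producer = Producer({
        "bootstrap.servers": "localhost:9092",   # assumption: local broker
        "transactional.id": "order-events-etl",  # stable id enables exactly-once
        "enable.idempotence": True,
    })
    producer.init_transactions()

    def publish_batch(events: list[bytes]) -> None:
        """Either every event in the batch is committed, or none are."""
        producer.begin_transaction()
        try:
            for event in events:
                producer.produce("order-events", value=event)
            producer.commit_transaction()
        except KafkaException:
            producer.abort_transaction()
            raise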

  • Data Quality and Observability:

Own data quality, SLA monitoring, and pipeline observability end-to-end. Implement data validation frameworks (Great Expectations or similar), anomaly detection, and alerting. Establish a culture of data reliability across the team.
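
For a flavor of what such a validation check looks like, here is a minimal sketch using Great Expectations' legacy pandas API (newer releases reorganize this around validation suites and checkpoints); the column names and thresholds are illustrative assumptions.

    # Sketch of a batch-level data quality check; columns are assumptions.
    import pandas as pd
    import great_expectations as ge

    df = ge.from_pandas(pd.DataFrame({
        "order_id": ["o-1", "o-2"],
        "delivery_minutes": [42, 58],
    }))
    df.expect_column_values_to_not_be_null("order_id")
    df.expect_column_values_to_be_between("delivery_minutes", min_value=0, max_value=60)

    result = df.validate()
    if not result["success"]:
        # In a real pipeline this would page on-call and quarantine the batch.
        raise RuntimeError("data quality check failed")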

  • Cross-functional Collaboration:

Partner closely with product, ML, data science, and business intelligence teams to surface the right data at the right latency. Translate complex operational and product requirements into efficient, scalable data solutions.

  • Tooling Evaluation and Build vs. Buy:

Continuously evaluate data tooling across the stack — from orchestration and transformation to storage and serving. Make structured build-vs-buy recommendations with a focus on developer productivity and long-term scalability.

  • Mentorship and Engineering Culture:

Mentor junior data engineers, conduct design reviews, and raise the bar for engineering practices on the team. Contribute to a culture of documentation, code quality, and knowledge sharing.

KPI Ownership

Own and drive improvements across key data platform metrics, including:

  • Pipeline uptime and SLA adherence
  • Data freshness and latency
  • Data quality and accuracy scores
  • Infrastructure cost per event processed
  • Mean time to detect and resolve data incidents
  • Query performance across analytical workloads

Qualifications and Experience

  • Bachelor's degree in Computer Science, Engineering, or a related field; an advanced degree is a plus.
  • 5–9 years of experience building production-grade data pipelines, with at least 3 years at significant scale (tens of millions of events per day).
  • Prior experience in e-commerce, quick commerce, fintech, or other high-velocity consumer tech environments strongly preferred.
  • Proven track record of architecting and scaling distributed data systems end-to-end — from design to production.

Skills

  • Deep expertise in at least one streaming framework: Apache Kafka, Apache Flink, or Spark Streaming.
  • Strong SQL and Python proficiency; comfort with a JVM language (Scala or Java) is a bonus.
  • Hands-on experience with cloud data warehouses: BigQuery, Redshift, Snowflake, or ClickHouse.
  • Solid understanding of distributed systems principles — partitioning, replication, consistency, and exactly-once semantics.
  • Experience with data orchestration tools such as Airflow, Prefect, or Dagster (a minimal Airflow sketch follows this list).
  • Familiarity with data transformation frameworks (dbt) and open table formats (Delta Lake, Apache Iceberg).
  • Strong understanding of data modeling for both analytical (star/snowflake schema) and operational workloads.
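
As referenced above, a minimal Airflow DAG sketch of the orchestration pattern; the DAG id, schedule, and task callables are placeholder assumptions, not an actual Fairdeal.Market pipeline.

    # Minimal Airflow DAG sketch; dag_id, schedule, and callables are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract() -> None:
        ...  # pull raw order events from the source system

    def transform() -> None:
        ...  # clean and load into the warehouse

    with DAG(
        dag_id="orders_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task   # run transform only after extract succeeds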

Attributes

  • Systems thinker who can connect warehouse operations, product event data, and downstream ML pipelines into a coherent architecture.
  • Comfortable working both hands-on in the codebase and in strategic infrastructure discussions with engineering leadership.
  • Highly ownership-driven with the ability to manage ambiguity and drive execution in a fast-paced environment.
  • Sensitivity to Real India constraints — low-bandwidth environments, device diversity, and cost efficiency for emerging market scale.

More Info

Job ID: 147482579
