
DTDC Express Limited

Data Engineer (Gurgaon)

  • Posted 19 hours ago

Job Description

Location: Gurgaon, Sector 32

Years of experience: 2 to 4

Budget: 20 to 25 LPA

Skills required for Data Engineer:

  • Python (very strong skills)
  • Kafka
  • AWS
  • Spark-based data processing jobs
  • CDC (Change Data Capture)
  • Apache Airflow
  • Prefect
  • Strong SQL
  • DSA (Data Structures and Algorithms)
  • Data lake
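
To illustrate the CDC skill listed above: change data capture means consuming a stream of row-level change events from a source database and applying them downstream. The sketch below is a hypothetical, framework-free toy (event shapes loosely modeled on Debezium-style `c`/`u`/`d` operations, not an actual connector API):

```python
# Toy sketch: apply CDC (change data capture) events to an in-memory snapshot.
# The event shape here is hypothetical, for illustration only.

def apply_cdc_events(snapshot, events):
    """Apply CDC events, in order, to a dict keyed by primary key."""
    for event in events:
        op, key = event["op"], event["key"]
        if op in ("c", "u"):            # create / update: upsert the new row image
            snapshot[key] = event["after"]
        elif op == "d":                 # delete: drop the row if present
            snapshot.pop(key, None)
    return snapshot

users = {1: {"name": "Asha"}}
events = [
    {"op": "u", "key": 1, "after": {"name": "Asha K"}},
    {"op": "c", "key": 2, "after": {"name": "Ravi"}},
    {"op": "d", "key": 1},
]
print(apply_cdc_events(users, events))  # {2: {'name': 'Ravi'}}
```

In a real pipeline the events would arrive via Kafka and be merged into a data lake table, but the upsert/delete logic is the same idea.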

Job brief

We are looking for a highly skilled Data Engineer with a strong foundation in programming, data structures and distributed data systems. The ideal candidate has hands-on experience with Python or Go, is deeply experienced in building batch and streaming pipelines using Kafka and Spark, and is comfortable working in a cloud-native (AWS) environment. This role involves building and optimizing scalable data pipelines that power analytics, reporting and downstream applications. You will work closely with data scientists, BI teams and platform engineers to deliver reliable, high-performance data systems aligned with business goals.

Responsibilities

  • Design, build and maintain scalable batch and streaming data pipelines.
  • Develop real-time data ingestion and processing systems using Kafka.
  • Build and optimize Spark-based data processing jobs (batch and streaming).
  • Write high-quality, production-grade code using Python or Go.
  • Apply strong knowledge of data structures, algorithms and system design to solve complex data problems.
  • Orchestrate workflows using Apache Airflow and other open-source tools.
  • Ensure data quality, reliability and observability across pipelines.
  • Work extensively on AWS (S3, EC2, IAM, EMR / Glue / EKS or similar services).
  • Collaborate with analytics and BI teams to support tools like Apache Superset.
  • Continuously optimize pipeline performance, scalability and cost.
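
The streaming responsibilities above (Kafka ingestion, batch and streaming Spark jobs) revolve around ideas like event-time windowing. As a rough, framework-independent sketch of the concept (a tumbling-window count in plain Python, not Spark or Kafka code):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per key in fixed (tumbling) event-time windows.

    Each event is (timestamp_seconds, key); an event falls in the window
    whose start is its timestamp floored to a multiple of window_seconds.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
print(tumbling_window_counts(events, 5))
# {(0, 'click'): 2, (5, 'view'): 1, (10, 'click'): 1}
```

Spark Structured Streaming expresses the same grouping declaratively (window plus key), with the engine handling state, late data and fault tolerance.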

Requirements and Skills

  • Strong proficiency in Python or Go (production-level coding required).
  • Excellent understanding of Data Structures and Algorithms.
  • Hands-on experience with Apache Kafka for real-time streaming pipelines.
  • Strong experience with Apache Spark (batch and structured streaming).
  • Solid understanding of distributed systems and data processing architectures.
  • Proficiency in SQL and working with large-scale datasets.
  • Hands-on experience with Apache Airflow for pipeline orchestration.
  • Experience working with open-source analytics tools such as Apache Superset.
  • Must have 2-4 years of relevant experience.
  • Good to have: experience with data lake architectures.
  • Understanding of data observability, monitoring and alerting.
  • Exposure to ML data pipelines or feature engineering workflows.
  • Education: B.Tech / BE in Computer Science, Information Technology, or a related engineering discipline.
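
Two of the requirements above, Airflow orchestration and strong DSA, meet in the same structure: a pipeline is a directed acyclic graph of tasks. A small illustrative sketch (plain Python with the standard-library `graphlib`, not Airflow itself; the task names are hypothetical) that topologically orders pipeline steps:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
deps = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# Topological sort yields a valid execution order respecting dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'transform', 'quality_check', 'load']
```

Airflow's scheduler performs the same dependency resolution over a DAG of operators, with retries, scheduling intervals and parallel execution layered on top.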

Job ID: 144456135
