

Senior Data Engineer (GCP)

8-10 Years
5 - 25 LPA
  • Posted 18 days ago

Job Description


We are looking for a Senior Data Engineer to lead the implementation of real-time data pipelines and migration on GCP. The role involves owning CDC architecture, complex data transformations, and performance optimization, ensuring reliable and scalable data flow for analytics and AI use cases.

Experience Required:

• Total Experience: 8–10 years

• Relevant Experience: 3–5 years in real-time data pipelines / CDC / modern data platforms

Tools & Projects the Candidate Will Work On

Tools / Technologies

• BigQuery (advanced optimization, large-scale queries)

• Dataflow / Pub/Sub (streaming pipelines)

• CDC Tools (Debezium / Kafka or similar)

• SQL (advanced) + Python

• Source system: Amazon Redshift

Projects

• Lead implementation of CDC pipeline (Redshift → BigQuery, near real-time)

• Drive migration of stored procedures to optimized SQL pipelines

• Design and implement high-performance data transformations

• Optimize data pipelines for latency, scalability, and cost

• Support data architecture decisions and mentor data engineers
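For illustration only (this is not part of the posting): the heart of such a CDC pipeline is applying ordered change events to a target table. The sketch below uses a Debezium-style event envelope (`op`, `before`, `after`) against an in-memory dict standing in for the BigQuery target; the event shape and field names are assumptions.

```python
# Minimal sketch of applying CDC change events to a target table.
# The envelope mimics Debezium's ("op", "before", "after"), but the
# column names and in-memory target are illustrative assumptions.

target = {}  # primary key -> row; stands in for the BigQuery table


def apply_change(event: dict) -> None:
    """Apply one create/update/delete event to the in-memory target."""
    op = event["op"]
    if op in ("c", "u"):          # create or update: upsert the "after" image
        row = event["after"]
        target[row["id"]] = row
    elif op == "d":               # delete: drop the "before" image's key
        target.pop(event["before"]["id"], None)


events = [
    {"op": "c", "before": None, "after": {"id": 1, "amount": 10}},
    {"op": "u", "before": {"id": 1, "amount": 10}, "after": {"id": 1, "amount": 25}},
    {"op": "c", "before": None, "after": {"id": 2, "amount": 7}},
    {"op": "d", "before": {"id": 2, "amount": 7}, "after": None},
]

for e in events:
    apply_change(e)

print(target)  # {1: {'id': 1, 'amount': 25}}
```

In a production pipeline this apply step would run as a streaming `MERGE` into BigQuery rather than an in-memory upsert, but the insert/update/delete semantics are the same.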

EXPERTISE AND QUALIFICATIONS

• Strong expertise in real-time data pipelines and CDC

• Deep understanding of streaming architecture and data flow design

• Advanced SQL and performance tuning skills

• Experience handling large-scale, high-volume data systems

• Ability to troubleshoot complex pipeline and latency issues

Must-Have Skills

• Hands-on experience with CDC tools (Debezium/Kafka or similar)

• Strong experience with BigQuery and query optimization

• Expertise in streaming pipelines (Dataflow / Pub/Sub or equivalent)

• Advanced SQL skills

• Experience in data migration projects

• Strong problem-solving and debugging capability

Good-to-Have Skills

• Experience with Redshift or similar warehouse systems

• Knowledge of partitioning, clustering, and cost optimization

• Exposure to batch + streaming hybrid architectures

• Basic understanding of AI/ML data requirements

• Experience in mentoring or leading junior engineers
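As a hedged aside on the partitioning, clustering, and cost-optimization point above: in BigQuery these are declared in the table DDL (`PARTITION BY` / `CLUSTER BY`). The helper below just assembles such a statement as a string; the table and column names are illustrative assumptions, not anything from this posting.

```python
# Sketch: build a BigQuery CREATE TABLE statement with partitioning and
# clustering, the usual levers for scan pruning and cost control.
# Table and column names are illustrative assumptions.

def create_table_ddl(table: str, columns: dict[str, str],
                     partition_col: str, cluster_cols: list[str]) -> str:
    cols = ",\n  ".join(f"{name} {typ}" for name, typ in columns.items())
    return (
        f"CREATE TABLE `{table}` (\n  {cols}\n)\n"
        f"PARTITION BY DATE({partition_col})\n"      # prune scans by date
        f"CLUSTER BY {', '.join(cluster_cols)}"      # co-locate hot filter keys
    )


ddl = create_table_ddl(
    "analytics.orders",
    {"order_id": "INT64", "customer_id": "INT64", "event_ts": "TIMESTAMP"},
    partition_col="event_ts",
    cluster_cols=["customer_id"],
)
print(ddl)
```

Queries that filter on the partition column then scan only the matching partitions, which is where most of the cost savings come from.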

Key Skills

• BigQuery (hands-on)

• Strong SQL (optimization + transformations)

• ETL/ELT pipeline development

• Real-time / CDC pipelines

• GCP services (GCS, Dataflow, Pub/Sub)

• Debugging & problem-solving

More Info

Open to candidates from: Indian

Job ID: 146205021
