
6-month Contract-to-Hire
Hybrid/Remote (Bangalore preferred)
Required Skills & Experience
Strong experience with big data technologies such as Apache Spark, Hadoop, and Hive.
Hands-on experience building batch data pipelines with a focus on performance, scalability, SLA adherence, and fault tolerance.
Strong programming skills in Python and/or Scala, with deep experience using Spark for data processing and analytics.
Experience working with GCP services including BigQuery, Google Cloud Storage (GCS), Dataproc, and Pub/Sub.
Solid experience writing and optimizing SQL, preferably BigQuery SQL and/or Spark SQL.
Strong understanding of data modeling, ETL/ELT patterns, and data quality best practices.
Experience with Kafka or similar messaging/streaming platforms.
Familiarity with workflow orchestration tools (e.g., Airflow or Cloud Composer).
Experience deploying and operating data pipelines in production cloud environments (GCP preferred, Azure acceptable).
Strong troubleshooting skills and the ability to optimize pipelines under real-world constraints.
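To illustrate the data-quality practices mentioned above, here is a minimal sketch of batch data-quality checks in plain Python. The helper names (check_not_null, check_unique, run_quality_checks) are illustrative, not taken from any specific framework; in practice these checks would typically run inside a Spark or BigQuery pipeline.

```python
def check_not_null(records, field):
    """Return the count of records where `field` is missing or None."""
    return sum(1 for r in records if r.get(field) is None)

def check_unique(records, field):
    """Return True if every value of `field` is distinct across records."""
    values = [r.get(field) for r in records]
    return len(values) == len(set(values))

def run_quality_checks(records):
    """Run basic checks and return a summary dict suitable for alerting."""
    return {
        "null_ids": check_not_null(records, "id"),
        "ids_unique": check_unique(records, "id"),
        "row_count": len(records),
    }
```

A pipeline stage could call run_quality_checks on a sampled batch and fail fast (or alert) when thresholds are breached.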
Job Description
We are seeking a skilled Data Engineer to design, build, and optimize large-scale batch data pipelines in a cloud environment. This role focuses on reliability, performance, and data quality, supporting analytics and downstream consumers through well-engineered big data solutions. The ideal candidate has strong experience with Apache Spark, cloud data platforms (GCP preferred), and writing performant SQL at scale.
Key Responsibilities
Design, develop, and maintain batch data pipelines using Apache Spark, Hadoop, Hive, or similar frameworks in a cloud environment.
Build highly optimized, fault-tolerant, and SLA-driven data pipelines that operate reliably at scale.
Leverage Google Cloud Platform (GCP) services such as BigQuery, GCS, Dataproc, and Pub/Sub to support data ingestion, processing, and storage.
Write and optimize SQL queries (BigQuery SQL and/or Spark SQL) for data analysis, profiling, and performance tuning.
Collaborate closely with analytics, data science, and downstream consumers to ensure data availability, correctness, and usability.
Monitor and troubleshoot pipeline failures; implement alerting, retries, and data quality checks.
Improve pipeline performance through partitioning, clustering, resource tuning, and query optimization.
Follow software engineering best practices, including version control, testing, and documentation.
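As a sketch of the retry handling referenced in the responsibilities above, here is a minimal retry-with-exponential-backoff wrapper in plain Python. The function name run_with_retries is illustrative; an orchestrator such as Airflow provides equivalent retry settings out of the box.

```python
import time

def run_with_retries(task, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call `task()`; on failure, retry with exponential backoff.

    `sleep` is injectable so tests can skip real delays.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure for alerting
            # back off 1s, 2s, 4s, ... before the next attempt
            sleep(base_delay * 2 ** (attempt - 1))
```

Wrapping each pipeline step this way lets transient failures (e.g., a brief GCS or BigQuery outage) self-heal, while persistent failures still propagate to alerting.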
Job ID: 146788761