dataplatr

GCP Data Engineer


Job Description

Job Title: GCP Data Engineer

Experience: 8 to 12 Years

Location: Coimbatore (On-site)

Employment Type: Full-Time

Job Summary

We are looking for a skilled and experienced GCP Data Engineer to design, build, and optimize scalable data pipelines and data platforms on Google Cloud.

The ideal candidate should have strong expertise in SQL, ETL/ELT processes, Databricks, Snowflake, and modern BI tools, with a solid understanding of data architecture and cloud-based data solutions.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines using GCP services (a minimal DAG sketch follows this list).
  • Build and optimize ETL/ELT workflows for data ingestion, transformation, and loading.
  • Work with Databricks for big data processing and advanced analytics.
  • Develop and manage data models and warehousing solutions using Snowflake.
  • Write complex and optimized SQL queries for large datasets.
  • Integrate data from multiple sources (APIs, databases, streaming platforms).
  • Ensure data quality, integrity, and governance across pipelines.
  • Collaborate with data analysts, BI teams, and stakeholders to deliver actionable insights.
  • Support reporting and analytics by integrating with BI tools (Power BI, Tableau, Looker, etc.).
  • Monitor and troubleshoot data workflows and performance issues.
  • Implement best practices for data security, scalability, and cost optimization in GCP.
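For illustration, a pipeline of the kind described above could be orchestrated on Cloud Composer. The sketch below is a minimal Airflow DAG (2.4+ syntax) that loads raw files from Cloud Storage into a BigQuery staging table and then runs an in-warehouse SQL transform; every project, bucket, dataset, and table name is hypothetical.

```python
# Minimal ELT sketch for Cloud Composer (Airflow 2.4+).
# All project, bucket, dataset, and table names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="orders_elt",
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # Extract/Load: land the day's raw Parquet files into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_orders",
        bucket="example-landing-bucket",
        source_objects=["orders/{{ ds }}/*.parquet"],
        destination_project_dataset_table="example-project.staging.orders",
        source_format="PARQUET",
        write_disposition="WRITE_TRUNCATE",
    )

    # Transform: aggregate inside BigQuery rather than in the orchestrator.
    transform = BigQueryInsertJobOperator(
        task_id="transform_orders",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `example-project.analytics.daily_orders` AS
                    SELECT order_date, customer_id, SUM(amount) AS total_amount
                    FROM `example-project.staging.orders`
                    GROUP BY order_date, customer_id
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform  # transform only runs after the raw load succeeds
```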

Required Skills & Qualifications

  • 8+ years of experience in Data Engineering or related roles.
  • Strong hands-on experience with Google Cloud Platform (GCP) services such as BigQuery, Cloud Storage, Dataflow, Pub/Sub, and Composer (Airflow).
  • Expertise in SQL (advanced querying, performance tuning).
  • Experience with Databricks (PySpark / Spark); a representative snippet follows this list.
  • Strong experience in Snowflake (data modeling, performance optimization).
  • Hands-on experience in building ETL/ELT pipelines.
  • Proficiency in Python / PySpark.
  • Experience with BI tools such as Tableau, Power BI, or Looker.
  • Good understanding of data warehousing concepts and dimensional modeling.
  • Familiarity with version control tools (Git) and CI/CD pipelines.
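As a rough illustration of the Databricks (PySpark) work listed above, the sketch below deduplicates, joins, aggregates, and writes a partitioned output. Paths and column names are invented for the example, and reading `gs://` paths assumes the cluster has the GCS connector configured.

```python
# A minimal PySpark batch-transform sketch; all names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_enrichment").getOrCreate()

# Read raw data landed in cloud storage (hypothetical paths).
orders = spark.read.parquet("gs://example-landing-bucket/orders/")
customers = spark.read.parquet("gs://example-landing-bucket/customers/")

# Typical ETL/ELT steps: deduplicate, enrich via join, then aggregate.
daily_totals = (
    orders.dropDuplicates(["order_id"])
    .join(customers, on="customer_id", how="left")
    .groupBy("order_date", "region")
    .agg(
        F.countDistinct("order_id").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partition the curated output by date so downstream queries can prune.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://example-curated-bucket/daily_totals/"
)
```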

Nice To Have

  • GCP certifications (e.g., Professional Data Engineer).
  • Experience with streaming data pipelines (Kafka / Pub/Sub); a minimal consumer sketch follows this list.
  • Knowledge of data governance, security, and compliance.
  • Exposure to ML/AI pipelines on GCP.
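For the streaming item above, a minimal Pub/Sub consumer in Python might look like the following; the project and subscription names are placeholders, and a Kafka consumer follows the same receive-process-acknowledge pattern.

```python
# Minimal Pub/Sub streaming-pull sketch (hypothetical project/subscription).
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "orders-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # A real pipeline would parse, validate, and route the event here.
    print(f"Received: {message.data!r}")
    message.ack()  # acknowledge so the message is not redelivered

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)

with subscriber:
    try:
        # Block while messages are processed; a service would run indefinitely.
        streaming_pull_future.result(timeout=60)
    except TimeoutError:
        streaming_pull_future.cancel()
        streaming_pull_future.result()  # wait for the shutdown to finish
```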

(ref:hirist.tech)


Job ID: 145105459