
CG-VAK Software & Exports Ltd.

Senior Data Engineer (Dataform, BigQuery)

  • Posted 18 hours ago

Job Description

Role & Responsibilities

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on experience building data pipelines, and cloud platform proficiency, with GCP/BigQuery expertise preferred.

Responsibilities

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or dbt to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution

Ideal Candidate

  • Strong Data Engineer Profile
  • Mandatory (Experience 1) – Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Mandatory (Experience 2) – Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Mandatory (Experience 3) – Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Mandatory (Experience 4) – Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Mandatory (Experience 5) – Must have experience in dimensional data modeling, including implementation of Slowly Changing Dimensions (SCD Type 1 and Type 2) in data warehouse environments.
  • Mandatory (Core Skill 1) – Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Mandatory (Core Skill 2) – Must have programming experience in Python and/or SQL for data processing.
  • Mandatory (Core Skill 3) – Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Preferred (Experience) – Exposure to data migration projects and/or data mesh architecture concepts.
  • Preferred (Skill) – Experience with Spark/PySpark or large-scale data processing frameworks.
  • Preferred (Company) – Experience working in product-based companies or data-driven environments.
  • Preferred (Education) – Bachelor's or Master's degree in Computer Science, Engineering, or related field.
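For candidates unfamiliar with the SCD Type 2 requirement above: Type 2 preserves history by expiring the current dimension row and inserting a new current version whenever a tracked attribute changes. A minimal in-memory sketch of that logic (function and field names here are illustrative, not part of the role's actual codebase):

```python
from datetime import date

def apply_scd2(dim_rows, incoming, key, tracked, today=None):
    """Apply SCD Type 2 logic to a list of dimension rows.

    dim_rows: existing dimension rows (dicts carrying 'valid_from',
              'valid_to', and 'is_current' alongside business attributes).
    incoming: fresh source rows, each containing `key` and the tracked attributes.
    key:      business key identifying an entity across versions.
    tracked:  attributes whose change triggers a new row version.
    """
    today = today or date.today().isoformat()
    # Index the current (open) version of each entity by business key.
    current = {r[key]: r for r in dim_rows if r["is_current"]}
    out = list(dim_rows)
    for row in incoming:
        cur = current.get(row[key])
        if cur and all(cur[a] == row[a] for a in tracked):
            continue  # no tracked attribute changed: keep existing version
        if cur:
            cur["valid_to"] = today   # expire the old version
            cur["is_current"] = False
        # Insert the new current version with an open validity window.
        out.append({**row, "valid_from": today,
                    "valid_to": None, "is_current": True})
    return out
```

In a warehouse this same pattern is typically expressed as a `MERGE` statement in BigQuery SQL or as an incremental model/snapshot in Dataform or dbt; the point is the update-then-insert versioning, not the storage layer.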

Skills: AWS, cloud, data, pipeline, GCP, transformation, Azure, ETL, architecture


Job ID: 145941389
