
  • Posted a month ago

Job Description

We are looking for a skilled Snowflake/Data Warehouse Engineer to design, build, and optimize our data warehouse architecture. In this role, you will be responsible for developing scalable data pipelines, managing our Snowflake data warehouse, and ensuring that our data is accurate, secure, and highly available for analysts and business stakeholders.

  • Pipeline Development: Design, construct, and maintain scalable data pipelines (ETL/ELT) to ingest data from various source systems (APIs, relational databases, flat files, event streams) into Snowflake.
  • Snowflake Architecture & Optimization: Build and manage Snowflake environments. Optimize virtual warehouses, scale compute resources efficiently to manage costs, and handle performance tuning for complex SQL queries.
  • Data Modelling: Design robust logical and physical data models (e.g., Star Schema, Data Vault) optimized for analytical workloads and reporting.
  • Advanced Snowflake Features: Implement and manage Snowflake-native features such as Snowpipe for continuous data ingestion, Streams and Tasks for automated workflows, Time Travel, Zero-Copy Cloning, and Secure Data Sharing.
  • Orchestration & Automation: Schedule and orchestrate data workflows using tools like Apache Airflow, Prefect, or Dagster.
  • Data Quality & Governance: Implement data validation checks, monitor pipeline health, and ensure compliance with data security and privacy standards (e.g., GDPR, CCPA) using Snowflake's role-based access control (RBAC) and dynamic data masking.
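The data validation checks mentioned in the last responsibility can be sketched in a few lines. This is a minimal, hypothetical illustration: the table name, field names, and check names are made up, and in a real pipeline these checks would run against Snowflake tables (e.g. via snowflake-connector-python) rather than in-memory records.

```python
# Minimal sketch of pipeline data-quality checks (all names hypothetical).
# A production pipeline would run equivalent checks against Snowflake tables.

def validate_orders(rows, min_rows=1):
    """Return a list of failed-check names for a batch of order records."""
    failures = []
    if len(rows) < min_rows:
        failures.append("row_count")          # batch smaller than expected
    if any(r.get("order_id") is None for r in rows):
        failures.append("not_null:order_id")  # key column must be present
    ids = [r["order_id"] for r in rows if r.get("order_id") is not None]
    if len(ids) != len(set(ids)):
        failures.append("unique:order_id")    # duplicates suggest a bad load
    return failures
```

A clean batch such as `validate_orders([{"order_id": 1}, {"order_id": 2}])` returns an empty list, while an empty or duplicated batch reports the failed check names so the orchestrator can halt or alert.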

Required Skills & Qualifications:

  • Experience: 8+ years overall, including 4+ years in Data Engineering with hands-on, dedicated experience managing a Snowflake environment.
  • Languages: Advanced proficiency in SQL (complex joins, analytical functions, query optimization) and strong programming skills in Python (or Scala/Java).
  • Cloud Platforms: Hands-on experience with at least one major cloud provider (AWS, GCP, or Azure) and their native data services (e.g., AWS S3, Google Cloud Storage, Azure Blob).
  • Modern Data Stack: Practical experience with modern data transformation and orchestration tools (Airflow, Fivetran, Snowflake tasks/streams).
  • ERP: Knowledge of the SAP ERP system would be an added advantage.
  • Problem Solving: Strong analytical skills with the ability to troubleshoot complex data pipeline failures and data discrepancies.
  • Streaming Data: Experience working with real-time data streaming technologies like Apache Kafka, Amazon Kinesis, or Snowpipe Streaming.
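As a concrete illustration of the "analytical functions" expected above, the sketch below runs a window-function query against SQLite from Python. The table and data are invented for the example; in practice the same pattern would be written in Snowflake SQL.

```python
import sqlite3

# Hypothetical sales table; in practice this query would run in Snowflake.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('EMEA', 100), ('EMEA', 300), ('APAC', 200), ('APAC', 50);
""")

# Rank each sale within its region by amount, descending -- a typical
# analytical (window) function pattern.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()
```

The query yields each region's sales ranked 1, 2, … from largest to smallest, the same shape of result an analyst would expect from the equivalent Snowflake query.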

Work experience in the Retail industry would be an added advantage.


Job ID: 143930017
