
LumenData

Snowflake - Lead Data Engineer


Job Description

Key Responsibilities

  • Lead the design, development, and optimization of data pipelines and data warehouse solutions on Snowflake.
  • Demonstrate deep Snowflake expertise: table types, storage integration, internal and external stages, streams, tasks, views, materialized views, Time Travel, Fail-safe, micro-partitions, warehouses, RBAC, the COPY command, file formats (CSV, JSON, and XML), Snowpipe, and stored procedures (SQL, JavaScript, or Python); see the minimal sketch after this list.
  • Develop and maintain dbt models for data transformation, testing, and documentation.
  • Hands-on dbt workflow skills: creating, running, and building models; scheduling; running dependent models; macros; and Jinja templating (optional).
  • Collaborate with cross-functional teams including data architects, analysts, and business stakeholders to deliver robust data solutions.
  • Ensure high standards of data quality, governance, and security across pipelines and platforms.
  • Leverage Airflow (or other orchestration tools) to schedule and monitor workflows.
  • Integrate data from multiple sources using tools such as Fivetran, Qlik Replicate, or IDMC (at least one).
  • Provide technical leadership, mentoring, and guidance to junior engineers in the team.
  • Optimize costs, performance, and scalability of cloud-based data environments.
  • Contribute to architectural decisions, code reviews, and best practices.
  • Set up CI/CD with Bitbucket or GitHub (at least one).
  • Build data models: entity/dimensional modeling (sub-dimensions, dimensions, facts) and Data Vault (hubs, links, satellites).
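
To make the Snowflake items above concrete, here is a minimal sketch (illustrative only; all account, bucket, and object names are hypothetical) that drives a stage, file format, COPY load, stream, and scheduled task from Python via the snowflake-connector-python package:

```python
import snowflake.connector

# Hypothetical connection parameters; in practice use key-pair auth or SSO.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# Target table, a CSV file format, and an external stage backed by a
# pre-existing (hypothetical) storage integration.
cur.execute(
    "CREATE TABLE IF NOT EXISTS raw_orders "
    "(order_id INT, amount NUMBER, order_ts TIMESTAMP)"
)
cur.execute("CREATE FILE FORMAT IF NOT EXISTS csv_fmt TYPE = CSV SKIP_HEADER = 1")
cur.execute("""
    CREATE STAGE IF NOT EXISTS raw_orders_stage
      URL = 's3://example-bucket/orders/'
      STORAGE_INTEGRATION = s3_int
      FILE_FORMAT = (FORMAT_NAME = 'csv_fmt')
""")

# Bulk load with COPY; Snowpipe automates the same load on file arrival.
cur.execute("COPY INTO raw_orders FROM @raw_orders_stage")

# Stream + task: capture new rows and merge them downstream on a schedule
# instead of re-scanning the whole table.
cur.execute("CREATE STREAM IF NOT EXISTS raw_orders_stream ON TABLE raw_orders")
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO curated.orders (order_id, amount, order_ts)
      SELECT order_id, amount, order_ts
      FROM raw_orders_stream
      WHERE METADATA$ACTION = 'INSERT'
""")
cur.execute("ALTER TASK merge_orders_task RESUME")

cur.close()
conn.close()
```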

Required Skills & Experience

  • 8-12 years of overall experience in Data Engineering, with at least 3-4 years in a lead role.
  • Strong hands-on expertise in Snowflake (data modeling, performance tuning, query optimization, security, and cost management).
  • Proficiency in dbt (core concepts, macros, testing, documentation, and deployment).
  • Solid programming skills in Python (for data processing, automation, and integrations).
  • Experience with workflow orchestration tools such as Apache Airflow; a minimal scheduling sketch follows this list.
  • Exposure to ELT/ETL tools such as Fivetran, Qlik Replicate, or IDMC.
  • Strong understanding of modern data warehouse architectures, data governance, and cloud-native environments.
  • Excellent problem-solving, communication, and leadership skills.
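
As a concrete illustration of the orchestration and dbt skills above, a minimal DAG sketch (assuming Airflow 2.4+ and a hypothetical dbt project at /opt/dbt/analytics) that checks source freshness and then runs a dependency-ordered dbt build:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # run daily at 02:00
    catchup=False,
) as dag:
    # Verify upstream source data is fresh before transforming it.
    check_sources = BashOperator(
        task_id="dbt_source_freshness",
        bash_command="cd /opt/dbt/analytics && dbt source freshness",
    )

    # `dbt build` runs models, tests, snapshots, and seeds in DAG order,
    # so dependent models automatically wait on their parents.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/dbt/analytics && dbt build",
    )

    check_sources >> dbt_build
```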

Good to Have

  • Hands-on experience with Databricks (PySpark, Delta Lake, MLflow); a short Delta Lake sketch follows this list.
  • Exposure to other cloud platforms (AWS, Azure, or GCP).
  • Experience in building CI/CD pipelines for data workflows.
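
For the Databricks item above, a short illustrative PySpark sketch (hypothetical paths; Delta Lake is available by default on Databricks) of the core Delta pattern: write a Delta table, then read a prior version back via time travel:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

# Land raw CSVs and persist them as a Delta table (hypothetical mounts).
orders = spark.read.option("header", True).csv("/mnt/raw/orders/")
orders.write.format("delta").mode("overwrite").save("/mnt/curated/orders")

# The Delta transaction log enables time travel, similar in spirit to
# Snowflake's Time Travel: read the table as of an earlier version.
latest = spark.read.format("delta").load("/mnt/curated/orders")
previous = (
    spark.read.format("delta")
    .option("versionAsOf", 0)  # first committed version of the table
    .load("/mnt/curated/orders")
)
print(latest.count(), previous.count())
```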

Job ID: 131809343