Description
We are looking for an experienced, results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake with dbt or Matillion, and will work effectively in a consulting setup.
In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client's organization.
Key Responsibilities
- Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.
- Build ingestion pipelines into Snowflake from various sources, including relational databases, APIs, cloud storage, and flat files.
- Implement data modeling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers, or a medallion architecture), enabling reliable and reusable data assets.
- Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.
- Apply dbt best practices: modular SQL development, testing, documentation, and version control.
- Optimize dbt/Snowflake performance through clustering, query profiling, materialization strategies, partitioning, and efficient SQL design.
- Apply CI/CD and Git-based workflows for version-controlled deployments.
- Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks.
- Collaborate with multiple stakeholders such as data analysts, data scientists, and data architects to understand requirements and deliver clean, validated datasets.
- Write well-documented, maintainable code that follows the team's Git and CI/CD workflows.
- Participate in Agile ceremonies including sprint planning, stand-ups, and retrospectives.
- Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.