We're looking for an experienced data engineer or analytics engineer with deep hands-on expertise in dbt and Snowflake to help us build and maintain a scalable, production-grade analytics pipeline.
This role focuses on data transformation, modeling, testing, and documentation in dbt, alongside advanced Snowflake performance tuning, cost optimization, and data security. You'll collaborate with data analysts, product stakeholders, and engineers to ensure the data environment delivers reliable, analytics-ready datasets.
Key Responsibilities
- Build modular, reusable dbt models in SQL
- Create refined staging and mart models following best practices (e.g., bronze/silver/gold layers or staging/intermediate/mart)
- Design and manage data warehouses, schemas, tables, and views in Snowflake
- Optimize compute costs via proper warehouse sizing, caching, and query optimization
- Implement dbt tests (unique, not_null, relationships, accepted_values) to ensure data quality
- Manage dbt documentation via dbt docs and keep model descriptions and metadata up to date
- Use dbt sources and snapshots to track slowly changing dimensions and upstream lineage
- Configure dbt environments using dbt Cloud or dbt Core with GitHub/GitLab CI
- Configure role-based access control (RBAC) for secure data access and data governance
- Utilize Snowflake features such as:
  - Snowpark, and Streams & Tasks for incremental loading
  - Time Travel and Fail-safe for recovery and auditing
  - Materialized views for query performance
- Monitor and tune long-running queries, optimize storage and clustering
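To give a sense of the modeling style described above, a minimal dbt staging model might look like the sketch below (the source, table, and column names are hypothetical, not part of this role's actual codebase):

```sql
-- models/staging/stg_orders.sql
-- Hypothetical staging model: rename and type raw columns, nothing else.
with source as (
    select * from {{ source('shop', 'raw_orders') }}
)

select
    id                         as order_id,
    customer_id,
    status                     as order_status,
    created_at::timestamp_ntz  as ordered_at
from source
```

Staging models like this stay thin on purpose; business logic lives downstream in intermediate and mart models.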
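The generic tests named above (unique, not_null, relationships, accepted_values) are declared in YAML alongside the model. A minimal sketch, with hypothetical model and column names:

```yaml
# models/staging/stg_orders.yml -- hypothetical schema file
version: 2

models:
  - name: stg_orders
    description: "One row per order, lightly cleaned from the raw source."
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: order_status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
      - name: customer_id
        tests:
          - relationships:
              to: ref('stg_customers')
              field: customer_id
```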
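Snowflake RBAC setup typically follows a role-hierarchy pattern: create a functional role, grant it object privileges, then attach it to the hierarchy and to users. A minimal sketch, assuming placeholder role, database, and schema names:

```sql
-- Hypothetical read-only analyst role (all object names are placeholders).
create role if not exists analyst_ro;

grant usage on database analytics to role analyst_ro;
grant usage on schema analytics.marts to role analyst_ro;
grant select on all tables in schema analytics.marts to role analyst_ro;
grant select on future tables in schema analytics.marts to role analyst_ro;

-- Attach the role to the standard hierarchy and to a user.
grant role analyst_ro to role sysadmin;
grant role analyst_ro to user some_analyst;
```

The `future tables` grant keeps new dbt-built tables readable without re-granting after every run.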
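Incremental loading with Streams & Tasks can be sketched as follows; the warehouse name, schedule, object names, and merge keys are assumptions for illustration only:

```sql
-- Hypothetical stream-and-task pipeline: merge new raw rows into a target.
create or replace stream raw_orders_stream on table raw.orders;

create or replace task load_orders_task
  warehouse = transform_wh
  schedule  = '5 minute'
  when system$stream_has_data('raw_orders_stream')
as
  merge into analytics.orders t
  using raw_orders_stream s
    on t.order_id = s.order_id
  when matched then update set t.status = s.status
  when not matched then insert (order_id, status)
                       values (s.order_id, s.status);

-- Tasks are created suspended; resume to start the schedule.
alter task load_orders_task resume;
```

The `when system$stream_has_data(...)` clause skips task runs (and their compute cost) when the stream is empty.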