Job Description
Duties
Role Summary
We are looking for a mid-level Data Engineer to build and maintain data pipelines connecting diverse source systems to Snowflake. You will ensure pipeline reliability, security, and compliance while collaborating with other data engineers to uphold engineering best practices.
Responsibilities
Pipeline Development: Design, build, and deploy ELT/ETL pipelines that ingest data from SaaS platforms, internal applications, APIs, databases, and other sources into Snowflake.
Pipeline Operations: Monitor, troubleshoot, and optimize pipelines for performance, reliability, and cost efficiency. Maintain SLAs for data freshness and accuracy.
Security & Vulnerability Management: Identify and remediate vulnerabilities across the pipeline stack including secrets management, encryption, access controls, and dependency patching.
Compliance & Governance: Ensure data infrastructure meets security, policy, and risk requirements. Support audits with evidence of controls, access policies, and data lineage.
Data Architecture: Contribute to designing and evolving the data architecture: evaluate current patterns, propose improvements, and implement scalable solutions that support long-term platform growth.
Collaboration: Work with fellow data engineers on shared standards for code review, testing, deployment, and documentation. Engage cross-functional stakeholders to align pipelines with business needs.
Skills
Nice to Have
Infrastructure-as-code (Terraform, CloudFormation)
Streaming platforms (Kafka, Kinesis)
Data quality frameworks (Great Expectations, dbt tests)
Snowflake SnowPro or cloud certifications
Education
Bachelor's degree in Computer Science, Mathematics, Statistics, or an equivalent subject; Master's degree preferred