
About the Role
We are a fast-growing startup looking for a hands-on Data Engineer who can build and own data
pipelines end-to-end, pick up business context quickly, and bring a hustle mindset to everything they do.
You will work closely with business, product, and tech teams to ensure clean, reliable, and scalable data flows across our platform.
Key Responsibilities
• Design, build, and maintain ELT/ETL data pipelines using Python and Snowflake.
• Write clean, production-grade Python code for data ingestion and transformation.
• Model and optimize data in Snowflake — manage schemas, costs, and query performance.
• Collaborate with business stakeholders to understand data requirements and translate them into
robust data models.
• Build and maintain data quality checks, monitoring, and alerting across pipelines.
• Work with orchestration tools (Airflow / Prefect) to schedule and manage pipeline workflows.
• Document pipelines, data models, and processes clearly for team reference.
• Proactively identify data gaps and drive improvements without waiting to be asked.
Requirements
• 2–5 years of experience in data engineering or a related role.
• Strong Python skills — production-ready code, not just scripting.
• Hands-on experience with Snowflake (stages, tasks, time travel, performance tuning).
• Proficiency in SQL — complex joins, window functions, CTEs.
• Experience with dbt, Airflow, Prefect, or similar transformation/orchestration tools.
• Ability to quickly understand business domains and translate requirements into data solutions.
• Familiarity with REST APIs and integrating third-party data sources.
• Comfortable working in a fast-paced, ambiguous startup environment.
• Strong communication skills — can explain technical issues to non-technical stakeholders.
Nice to Have
• Experience with Fivetran, Airbyte, or similar ingestion tools.
• Exposure to streaming pipelines (Kafka, Kinesis).
• Familiarity with cloud platforms — AWS or GCP.
• Prior experience in a B2B SaaS or lead generation environment.
• Knowledge of data warehouse design patterns (Kimball, medallion architecture).
What We Offer
• Competitive salary with equity participation.
• Flexible hybrid / remote work setup.
• Direct ownership and fast career growth in an early-stage team.
• Budget for tools, courses, and professional development.
• A modern, cutting-edge data stack with no legacy baggage.
Job ID: 147476801
Skills:
Snowflake, PySpark, Kafka, Python, SQL, AWS