
demand ai

Data Engineer

2-5 Years
  • Posted 5 hours ago

Job Description

About the Role

We are a fast-growing startup looking for a hands-on Data Engineer who can build and own data pipelines end-to-end, pick up business context quickly, and bring a hustle mindset to everything they do.

You will work closely with business, product, and tech teams to ensure clean, reliable, and scalable data flows across our platform.

Key Responsibilities

• Design, build, and maintain ELT/ETL data pipelines using Python and Snowflake (a rough sketch of one such pipeline follows this list).

• Write clean, production-grade Python code for data ingestion and transformation.

• Model and optimize data in Snowflake — manage schemas, costs, and query performance.

• Collaborate with business stakeholders to understand data requirements and translate them into robust data models.

• Build and maintain data quality checks, monitoring, and alerting across pipelines.

• Work with orchestration tools (Airflow / Prefect) to schedule and manage pipeline workflows.

• Document pipelines, data models, and processes clearly for team reference.

• Proactively identify data gaps and drive improvements without waiting to be asked.
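
As a rough illustration of the pipeline work described in the first bullet: the sketch below assumes recent Airflow (2.4+, TaskFlow API), the requests library, and snowflake-connector-python; the API endpoint, credentials, and table names are all hypothetical, not a description of our actual stack.

import json
from datetime import datetime

import requests
import snowflake.connector
from airflow.decorators import dag, task


def snowflake_conn():
    # Credentials would normally come from an Airflow connection or a secrets
    # manager; hypothetical values are inlined to keep the sketch self-contained.
    return snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="change-me",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def leads_pipeline():
    @task
    def extract() -> list[dict]:
        # Pull one page of records from a hypothetical third-party REST API.
        resp = requests.get("https://api.example.com/v1/leads", timeout=30)
        resp.raise_for_status()
        return resp.json()["results"]

    @task
    def load(rows: list[dict]) -> None:
        # Land raw JSON in a VARIANT column; modeling happens in-warehouse (ELT).
        with snowflake_conn() as conn:
            cur = conn.cursor()
            cur.execute("CREATE TABLE IF NOT EXISTS RAW.LEADS (payload VARIANT)")
            cur.executemany(
                "INSERT INTO RAW.LEADS SELECT PARSE_JSON(%s)",
                [(json.dumps(row),) for row in rows],
            )

    @task
    def transform() -> None:
        # Raw -> modeled table, entirely in Snowflake SQL.
        with snowflake_conn() as conn:
            conn.cursor().execute(
                """
                CREATE OR REPLACE TABLE ANALYTICS.MARTS.DIM_LEADS AS
                SELECT payload:id::string            AS lead_id,
                       payload:email::string         AS email,
                       payload:created_at::timestamp AS created_at
                FROM RAW.LEADS
                """
            )

    load(extract()) >> transform()


leads_pipeline()

Landing raw JSON first and transforming inside the warehouse keeps ingestion simple and pushes the heavy lifting to Snowflake, which is the ELT split the first bullet refers to.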

Requirements

• 2–5 years of experience in data engineering or a related role.

• Strong Python skills — production-ready code, not just scripting.

• Hands-on experience with Snowflake (stages, tasks, time travel, performance tuning).

• Proficiency in SQL — complex joins, window functions, CTEs (a worked example follows this list).

• Experience with dbt, Airflow, Prefect, or similar transformation/orchestration tools.

• Ability to quickly understand business domains and translate requirements into data solutions.

• Familiarity with REST APIs and integrating third-party data sources.

• Comfortable working in a fast-paced, ambiguous startup environment.

• Strong communication skills — can explain technical issues to non-technical stakeholders.
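
To make the SQL bullet concrete, here is a hedged example pairing a CTE with a window function to deduplicate the raw table from the sketch above, plus a Snowflake time-travel read (one of the features named in the Snowflake bullet); all table names remain hypothetical.

import snowflake.connector

# CTE + ROW_NUMBER() window function: keep only the newest row per lead.
DEDUPE_SQL = """
WITH ranked AS (
    SELECT
        payload:id::string            AS lead_id,
        payload:email::string         AS email,
        payload:created_at::timestamp AS created_at,
        ROW_NUMBER() OVER (
            PARTITION BY payload:id
            ORDER BY payload:created_at DESC
        ) AS rn
    FROM RAW.LEADS
)
SELECT lead_id, email, created_at
FROM ranked
WHERE rn = 1
"""

# Snowflake time travel: read the table as it looked one hour ago.
TIME_TRAVEL_SQL = "SELECT COUNT(*) FROM RAW.LEADS AT(OFFSET => -3600)"

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="change-me",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
for stmt in (DEDUPE_SQL, TIME_TRAVEL_SQL):
    print(conn.cursor().execute(stmt).fetchmany(5))
conn.close()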

Nice to Have

• Experience with Fivetran, Airbyte, or similar ingestion tools.

• Exposure to streaming pipelines (Kafka, Kinesis).

• Familiarity with cloud platforms — AWS or GCP.

• Prior experience in a B2B SaaS or lead generation environment.

• Knowledge of data warehouse design patterns (Kimball, medallion architecture); a brief sketch of the medallion layering follows this list.
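
For reference on the last bullet: the medallion pattern layers a warehouse into bronze (raw), silver (cleaned and typed), and gold (business-ready) zones. The statements below are one hypothetical Snowflake rendering of that layering, reusing the hypothetical tables from the earlier sketches.

import snowflake.connector

# Hypothetical medallion layering: each statement promotes data one layer up.
MEDALLION_STEPS = [
    # Bronze: append-only landing zone for raw payloads.
    "CREATE TABLE IF NOT EXISTS BRONZE.LEADS (payload VARIANT, loaded_at TIMESTAMP)",
    # Silver: typed, deduplicated, validated records.
    """
    CREATE OR REPLACE TABLE SILVER.LEADS AS
    SELECT DISTINCT
        payload:id::string            AS lead_id,
        payload:email::string         AS email,
        payload:created_at::timestamp AS created_at
    FROM BRONZE.LEADS
    WHERE payload:email IS NOT NULL
    """,
    # Gold: aggregates shaped for reporting and BI.
    """
    CREATE OR REPLACE TABLE GOLD.LEADS_DAILY AS
    SELECT DATE_TRUNC('day', created_at) AS day, COUNT(*) AS new_leads
    FROM SILVER.LEADS
    GROUP BY 1
    """,
]

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="change-me",
    warehouse="ETL_WH", database="ANALYTICS",
)
for stmt in MEDALLION_STEPS:
    conn.cursor().execute(stmt)
conn.close()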

What We Offer

• Competitive salary with equity participation.

• Flexible hybrid / remote work setup.

• Direct ownership and fast career growth in an early-stage team.

• Budget for tools, courses, and professional development.

• A modern, cutting-edge data stack — no legacy baggage.

Job ID: 147476801
