
  • Posted 5 hours ago

Job Description

Job Title: Data Engineer (Python/SQL)

Location: Hyderabad

Job Type: Full-time

About Us

At Best Nanotech, we are [brief mission statement, e.g., revolutionizing supply chain logistics]. We believe that data is our most valuable asset. We are building a world-class Data Platform team to ensure that our Analysts, Data Scientists, and Leadership have access to clean, reliable, and timely data.

The Role

We are looking for a Data Engineer to join our infrastructure team. You will expand and optimize our data pipeline architecture and improve data flow and collection for cross-functional teams.

You won't just be moving data from A to B; you will be designing the architecture that makes that movement fast, reliable, and cost-effective.

Key Responsibilities

  • Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines using Python and SQL.
  • Architecture: Help manage our cloud data warehouse (Snowflake / BigQuery / Redshift) and data lake.
  • Orchestration: Schedule and monitor workflows using Airflow, Dagster, or Prefect.
  • Data Quality: Implement automated testing and validation to ensure data accuracy and consistency.
  • Collaboration: Work closely with Data Scientists to prepare data for ML models and with Product teams to capture new data events.
  • Performance Tuning: Optimize slow-running queries and minimize cloud compute costs.
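
To give a flavor of the pipeline and data-quality work above, here is a minimal, illustrative Python sketch of an extract-transform-load step with simple validation gates. All names (`extract`, `transform`, `validate`) and the sample records are made up for this example, not part of our stack.

```python
def extract():
    # In a real pipeline this would read from an API, database, or file.
    return [
        {"order_id": 1, "amount": "19.99"},
        {"order_id": 2, "amount": "5.00"},
        {"order_id": 2, "amount": "5.00"},  # duplicate event
    ]

def transform(rows):
    # Cast types and deduplicate on the business key.
    seen, out = set(), []
    for row in rows:
        if row["order_id"] in seen:
            continue
        seen.add(row["order_id"])
        out.append({"order_id": row["order_id"],
                    "amount": float(row["amount"])})
    return out

def validate(rows):
    # Data-quality gates: unique keys, no negative amounts.
    assert len({r["order_id"] for r in rows}) == len(rows), "duplicate keys"
    assert all(r["amount"] >= 0 for r in rows), "negative amount"
    return rows

clean = validate(transform(extract()))
```

In production, the same pattern would run under an orchestrator (Airflow, Dagster, or Prefect) with the validation step failing the task rather than raising inline.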

What We Are Looking For

  • Experience: 2+ years of experience in Data Engineering.
  • Coding: Advanced proficiency in SQL (window functions and CTEs should be second nature) and Python.
  • Cloud: Experience with a major cloud provider (AWS, GCP, or Azure).
  • Big Data Tools: Familiarity with distributed computing frameworks like Spark or Databricks is a plus.
  • Concepts: Strong understanding of Data Modeling (Star Schema, Snowflake Schema) and Data Warehousing concepts.
  • Engineering Mindset: Experience with Git, CI/CD pipelines, and Containerization (Docker).
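
As an illustration of the SQL fluency described above, the snippet below combines a CTE with a window function, run against an in-memory SQLite database from Python. The table, columns, and data are invented purely for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', 10.0), ('alice', 30.0), ('bob', 20.0);
""")

# CTE aggregates per customer; window function ranks by total spend.
query = """
WITH totals AS (
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
)
SELECT customer,
       total,
       RANK() OVER (ORDER BY total DESC) AS spend_rank
FROM totals
ORDER BY spend_rank;
"""
rows = conn.execute(query).fetchall()
# rows -> [('alice', 40.0, 1), ('bob', 20.0, 2)]
```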

Bonus Points

  • Experience with Streaming technologies (Kafka, Kinesis, Flink).
  • Experience with Infrastructure as Code (Terraform).
  • Knowledge of dbt (Data Build Tool).

Why Join Us

  • Impact: Your work will directly influence company strategy and product features.
  • Compensation: Competitive salary + Equity/Stock Options.
  • Learning: Budget for certifications (AWS/GCP), conferences, and workshops.
  • Culture: A no-blame culture that values learning from incidents.
  • Health: Premium medical, dental, and vision coverage.

Ready to apply? Click Easy Apply or send your resume to [Confidential Information].

Job ID: 135863259