

Job Description

Experience: 7+ years

Location: Bangalore

Skills Required: Databricks, PySpark & Python, SQL, AWS Services

Project Overview & Role Scope

We are seeking highly skilled Data Engineers with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines using Databricks and PySpark.
  • Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
  • Implement data integration solutions across AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
  • Collaborate with analytics, data science, and business teams to deliver clean, reliable datasets.
  • Ensure data quality, performance, and reliability across workflows.
  • Participate in code reviews, architecture discussions, and performance optimization.
  • Support migration and modernization of legacy systems to cloud-based solutions.
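To make the responsibilities above concrete, here is a minimal, framework-agnostic sketch of an extract-transform-validate-load pipeline. The real role would use Databricks and PySpark reading from S3 and writing to Redshift; this stdlib-only Python version (all names and sample data are illustrative) just shows the stage structure and a fail-fast data-quality gate.

```python
import csv
import io

# Illustrative raw input; in practice this would be read from S3 or a
# Databricks source table rather than an inline string.
RAW_CSV = """user_id,event,value
1,click,3
2,click,
3,view,7
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast fields to proper types, defaulting blanks to 0."""
    return [
        {"user_id": int(r["user_id"]),
         "event": r["event"],
         "value": int(r["value"] or 0)}
        for r in rows
    ]

def validate(rows: list[dict]) -> list[dict]:
    """Data-quality gate: fail fast on bad records before loading."""
    for r in rows:
        if r["value"] < 0:
            raise ValueError(f"bad value in row: {r}")
    return rows

def load(rows: list[dict]) -> dict:
    """Load: aggregate per event (stands in for a warehouse write)."""
    totals: dict = {}
    for r in rows:
        totals[r["event"]] = totals.get(r["event"], 0) + r["value"]
    return totals

totals = load(validate(transform(extract(RAW_CSV))))
print(totals)  # {'click': 3, 'view': 7}
```

In a Databricks/PySpark setting, the same shape appears as DataFrame reads, transformations, and writes, with the validation step often expressed as filter conditions or expectations.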

Key Skills

  • Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
  • Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
  • Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
  • Experience with data modeling, schema design, and performance optimization.
  • Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
  • Excellent problem-solving and communication skills.
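The SQL proficiency item above calls out CTEs and window functions specifically. As a small runnable illustration (using Python's built-in sqlite3 with an invented `orders` table, purely for demonstration), the query below combines a CTE with `RANK()` and a partitioned `SUM()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ordered_at TEXT);
INSERT INTO orders VALUES
  (1, 'acme',   100.0, '2024-01-01'),
  (2, 'acme',   250.0, '2024-01-05'),
  (3, 'globex',  75.0, '2024-01-02'),
  (4, 'acme',    50.0, '2024-01-07'),
  (5, 'globex', 125.0, '2024-01-09');
""")

# The CTE narrows the input set; the window functions then rank each
# customer's orders by amount and compute a per-customer running total,
# all without collapsing rows the way GROUP BY would.
query = """
WITH recent AS (
    SELECT customer, amount
    FROM orders
    WHERE ordered_at >= '2024-01-01'
)
SELECT customer,
       amount,
       RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS amount_rank,
       SUM(amount) OVER (PARTITION BY customer) AS customer_total
FROM recent
ORDER BY customer, amount_rank
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

The same pattern carries over directly to Spark SQL or warehouse dialects such as Redshift, which is where it would actually run in this role.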

More Info


Job ID: 135951377