PwC India

Databricks Data Engineer


Job Description & Summary:

We are seeking an experienced Data Engineer – Databricks to design, build, and operate scalable, high-performance data pipelines on the Databricks Lakehouse Platform. The role involves hands-on development with Apache Spark, Databricks notebooks, Delta Lake, and cloud-native services, along with close collaboration with analytics, AI/ML, and business teams.

Job Position Title:

Senior Associate_Data Engineer Databricks_Data and Analytics_Advisory_Bangalore

Responsibilities:

  • Design, build, and maintain end-to-end data pipelines using Databricks (PySpark / Spark SQL)
  • Implement batch and incremental data processing using Delta Lake and multi-hop architecture
  • Develop and optimize Databricks notebooks, jobs, and workflows
  • Ingest, transform, and curate large-scale structured and semi-structured datasets
  • Support analytics, reporting, and downstream data consumption use cases
  • Ensure data quality, reliability, lineage, and governance
  • Collaborate with data scientists, analysts, and architects on AI/ML workloads
  • Optimize Spark jobs for performance and cost efficiency
  • Adhere to enterprise security, access control, and compliance standards
  • Provide production support and troubleshoot data pipeline issues
  • Document technical designs, data flows, and operational runbooks
  • Mentor junior engineers and contribute to best practices
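
As a minimal illustration of the multi-hop pipeline pattern referenced above, the Spark SQL sketch below moves data through bronze (raw), silver (cleaned), and gold (aggregated) Delta tables. The table names, columns, and source path are hypothetical, and the snippet assumes a Databricks/Spark SQL environment:

```sql
-- Bronze: land raw JSON as-is into a Delta table (source path is hypothetical)
CREATE TABLE IF NOT EXISTS bronze_orders USING DELTA AS
SELECT *, current_timestamp() AS _ingested_at
FROM json.`/mnt/raw/orders/`;

-- Silver: clean, cast, and deduplicate the bronze layer
CREATE OR REPLACE TABLE silver_orders USING DELTA AS
SELECT DISTINCT
  order_id,
  customer_id,
  CAST(amount AS DECIMAL(10, 2)) AS amount,
  order_ts
FROM bronze_orders
WHERE order_id IS NOT NULL;

-- Gold: aggregated, analytics-ready table for reporting
CREATE OR REPLACE TABLE gold_daily_revenue USING DELTA AS
SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM silver_orders
GROUP BY DATE(order_ts);
```

Because each hop persists to Delta Lake, downstream consumers get ACID reads while upstream layers remain independently re-runnable.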

Mandatory skill sets:

  • 3+ years of experience as a Data Engineer with strong Databricks expertise
  • Hands-on experience with Apache Spark, PySpark, and Spark SQL
  • Strong knowledge of Delta Lake and Lakehouse architecture
  • Advanced SQL skills
  • Experience with ETL/ELT patterns and data warehousing concepts
  • Exposure to at least one cloud platform (Azure / AWS / GCP)
  • Understanding of distributed computing concepts
  • Experience working in Agile teams

Preferred skill sets:

  • Experience with Unity Catalog and data governance
  • Exposure to Auto Loader or streaming frameworks
  • CI/CD for data pipelines
  • Python for data engineering and automation
  • Databricks certification (Associate / Professional)

Years of experience required:

3 to 8 years

Education qualification:

Bachelor's or Master's degree in Computer Science, Engineering, or a related field (60% and above)

Job ID: 146787083
