
PwC India

IN-Senior Associate_ Databricks Senior Data Engineer_Data and Analytics_Advisory_Bangalore

  • Posted 4 days ago

Job Description

Line of Service
Advisory

Industry/Sector
Not Applicable

Specialism
Data, Analytics & AI

Management Level
Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth.

In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary:


We are seeking a hands-on Databricks Data Engineer with 5–7 years of experience who can design, build, and optimize scalable data pipelines on the Databricks Lakehouse. The ideal candidate is strong in SQL, Python, and PySpark, understands Delta Lake inside out, and is experienced with production-grade ETL/ELT, orchestration, cost/performance optimization, and data governance (e.g., Unity Catalog).

Candidates should have hands-on knowledge of Azure ADF and ADLS.
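As an illustration of the ADLS hands-on work the role expects, here is a minimal configuration sketch for authenticating Spark to ADLS Gen2 with a service principal. This is not PwC's setup: the storage account, container, secret scope, and placeholder IDs are hypothetical, and it assumes a Databricks notebook where `spark` and `dbutils` are predefined.

```python
# Hypothetical sketch: service-principal (OAuth) access to ADLS Gen2 from Spark.
# "mystorage", the "kv" secret scope, and the <...> placeholders are examples only.
acct = "mystorage.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{acct}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{acct}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{acct}", "<app-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{acct}",
               dbutils.secrets.get(scope="kv", key="sp-secret"))
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{acct}",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

# Read a Delta table directly from the lake via an abfss:// URI.
df = spark.read.format("delta").load(f"abfss://raw@{acct}/events")
```

In practice the same `abfss://` paths are registered as Unity Catalog external locations rather than configured per notebook.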

Responsibilities:

You will work on:


  • Build reliable, scalable batch and streaming pipelines using PySpark and Databricks Workflows/Jobs.
  • Implement Delta Lake best practices: schema enforcement/evolution, ACID transactions, time travel, OPTIMIZE/ZORDER, and VACUUM.
  • Develop Delta Live Tables (DLT) or Structured Streaming pipelines with Auto Loader for ingestion.
  • Write robust, modular code in Python (packaging, logging, configuration, error handling).
  • Write efficient and scalable SQL queries for data extraction and reporting.
  • Optimize Spark jobs (partitions, joins, bucketing, caching, AQE, broadcast hints, file sizing).
  • Demonstrate familiarity with SQL Warehouses, Photon, DBR versions, cluster policies, and pools.
  • Orchestrate Workflows/Jobs: tasks, dependencies, parameters, retries, alerts, schedules.
  • Manage Unity Catalog: permissions, data lineage, audit, external locations, shares; catalog-first design.
  • Ensure idempotent, replayable, and incremental pipelines with proper checkpoints and watermarks.
  • Implement data quality checks (expectations/tests in DLT or Great Expectations) and unit/integration tests for pipelines.
  • Ensure data quality, integrity, and governance across the pipeline.
  • Work with CI/CD tools to automate deployment and testing of data workflows.
  • Work with stakeholders to define data SLAs, schemas, and interface contracts.
  • Document pipelines, data sets (data dictionary, lineage), and usage guidelines.



Primary Skills:
  • Databricks Platform
  • PySpark
  • SQL
  • Data Modelling
  • Python (scripting, data manipulation)
  • Azure ADF, ADLS



Secondary Skills:
  • CI/CD (Git, Jenkins, Azure DevOps, etc.)

Mandatory skill sets:

Databricks Certified Data Engineer Associate

Preferred skill sets:

Analytical mindset with strong problem-solving ability

Years of experience required:

Experience: 5–7 years



Education Qualification:

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration, Master of Engineering

Degrees/Field of Study preferred:

Certifications (if blank, certifications not specified)

Required Skills
Microsoft Azure, Microsoft Azure Analytics Services

Optional Skills
Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline + 27 more

Desired Languages (If blank, desired languages not specified)

Travel Requirements

Available for Work Visa Sponsorship

Government Clearance Required

Job Posting End Date
May 14, 2026

More Info


Job ID: 146878873