
Position Description:
Your future duties and responsibilities:
We are seeking a skilled Azure Databricks Developer to design, develop, and optimize big data pipelines using Databricks on Azure. The ideal candidate will have strong expertise in PySpark, Azure Data Lake, and data engineering best practices in a cloud environment.
Key Responsibilities:
Design and implement ETL/ELT pipelines using Azure Databricks and PySpark.
Work with structured and unstructured data from diverse sources (e.g., ADLS Gen2, SQL DBs, APIs).
Optimize Spark jobs for performance and cost-efficiency.
Collaborate with data analysts, architects, and business stakeholders to understand data needs.
Develop reusable code components and automate workflows using Azure Data Factory (ADF).
Implement data quality checks, logging, and monitoring.
Participate in code reviews and adhere to software engineering best practices.
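To illustrate the kind of data quality checks this role involves, here is a minimal, stdlib-only Python sketch; the function names and the 5% threshold are illustrative assumptions, not part of the Databricks or ADF APIs:

```python
# Hypothetical row-level data quality gate, as might run before loading a batch.
# All names and thresholds here are illustrative, not tied to any specific library.
from typing import Iterable


def null_rate(rows: Iterable[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    rows = list(rows)
    if not rows:
        return 0.0
    nulls = sum(1 for r in rows if r.get(column) is None)
    return nulls / len(rows)


def passes_quality_gate(rows: list, column: str, max_null_rate: float = 0.05) -> bool:
    """Reject a batch whose null rate for `column` exceeds the threshold."""
    return null_rate(rows, column) <= max_null_rate


batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},
    {"id": 3, "amount": 7.5},
]
print(passes_quality_gate(batch, "amount"))  # 1/3 of rows null > 5% -> False
```

In a Databricks pipeline the same gating idea would typically be expressed with PySpark DataFrame aggregations rather than plain Python lists; the sketch above only shows the logic.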
Required Skills & Qualifications:
5+ years of experience in Apache Spark / PySpark.
5+ years working with Azure Databricks and Azure Data Services (ADLS Gen2, ADF, Synapse).
Strong understanding of data warehousing, ETL, and data lake architectures.
Proficiency in Python and SQL.
Experience with Git, CI/CD tools, and version control practices.
Required qualifications to be successful in this role:
Skills: ABD, ADL, PySpark
Industry Type: IT Services & Consulting
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Bachelor of Technology / Bachelor of Engineering (B.Tech/B.E.) in Any Specialization
PG: Any Postgraduate
We are insights-driven and outcomes-focused, helping accelerate returns on your investments. Across 21 industry sectors and 400 locations worldwide, we provide comprehensive, scalable, and sustainable IT and business consulting services that are informed globally and delivered locally.
Job ID: 141497691