Job Title: Data Engineer (Azure + Databricks)
Job Summary
We are looking for a highly experienced Data Engineer to design and build scalable data platforms using Azure and Databricks.
The role involves leading end-to-end data engineering initiatives, building high-performance data pipelines, and enabling advanced analytics through modern Lakehouse architecture.
Key Responsibilities
- Design and implement scalable data pipelines using PySpark and Databricks
- Build and manage Lakehouse architecture using Delta Lake
- Lead data platform design, including data modeling, ingestion, transformation, and serving layers
- Optimize large-scale data processing jobs for performance and cost efficiency
- Implement data governance, security, and access control (e.g., Unity Catalog)
- Migrate legacy ETL and PL/SQL workloads to modern cloud-based architectures
- Develop orchestration workflows using Databricks Workflows / Control-M / ADF
- Collaborate with business, analytics, and engineering teams to deliver data solutions
- Lead code reviews, enforce best practices, and mentor junior engineers
- Establish CI/CD and DevOps practices for data pipeline development and deployment
Required Skills & Experience
- 8+ years of experience in Data Engineering
- Strong expertise in Python, PySpark, Apache Spark
- Hands-on experience with Databricks (Delta Lake, Lakehouse architecture)
- Experience with Azure Data Platform (ADF, ADLS, Azure Databricks)
- Strong SQL and data modeling skills
- Experience with Snowflake / Oracle / SQL Server
- Knowledge of orchestration tools such as Control-M / Airflow / ADF
- Experience in performance tuning and optimization of large datasets
- Familiarity with Git, CI/CD pipelines, and DevOps practices