Role Overview
We are seeking a Senior Azure Data Engineer (4-7 years of experience) with a hands-on track record of building, optimizing, and supporting scalable data pipelines on the Microsoft Azure platform. The role emphasizes Azure Databricks, Azure Data Factory (ADF), and Azure Data Lake Storage (ADLS Gen2), with ownership of end-to-end data engineering solutions in production environments.
Key Responsibilities
- Design, develop, and maintain end-to-end batch and incremental data pipelines on Azure
- Build ETL/ELT workflows using Azure Databricks (PySpark, Spark SQL)
- Orchestrate pipelines using Azure Data Factory (ADF)
- Implement and manage Delta Lake (ACID transactions, schema evolution, time travel); a minimal sketch follows this list
- Optimize Spark jobs for performance, scalability, and cost
- Integrate data from RDBMS, files, APIs, and cloud storage
- Ensure data quality, monitoring, and reliability in production
- Collaborate with analytics, BI, and downstream consumers
- Participate in code reviews and mentor junior engineers
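For illustration only, the snippet below sketches the kind of incremental Delta Lake upsert this role involves. It assumes a hypothetical raw_orders landing zone in ADLS Gen2 and a curated.orders Delta table; all paths, table names, and columns are placeholders, not part of the role requirements.

```python
# Minimal sketch of an incremental Delta Lake upsert in PySpark.
# Paths, table names, and columns (raw_orders, curated.orders, order_id)
# are illustrative assumptions only.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the latest batch of source records from a hypothetical ADLS Gen2 container.
updates = (
    spark.read.format("parquet")
    .load("abfss://raw@storageaccount.dfs.core.windows.net/raw_orders/")
)

# Merge into the curated Delta table: update matched rows, insert new ones.
target = DeltaTable.forName(spark, "curated.orders")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```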
Required Skills
Core Technical Skills
- 4-7 years of experience in Data Engineering
- Strong hands-on experience with Azure Databricks
- Proficiency in PySpark and Spark SQL
- Experience with Azure Data Factory (ADF) for orchestration
- Hands-on experience with ADLS Gen2
- Strong SQL skills and understanding of data modeling
Azure & Cloud
- Azure Storage, Azure Key Vault
- Basic understanding of Azure security (AAD, RBAC)
- Exposure to Azure Synapse (preferred)
Tools & Practices
- Git-based version control
- Workflow scheduling and monitoring
- Performance tuning and cost optimization
Good to Have
- Experience with streaming data (Kafka / Event Hubs / Spark Streaming)
- ML exposure using MLflow / Databricks ML
- Unity Catalog & data governance
- Python scripting beyond Spark