Job Description
Job Title: Data Engineer (Azure Databricks)
Experience: 5+ Years
Location: [Mention Location]
Employment Type: Full-Time
Job Summary
We are looking for an experienced Data Engineer with strong expertise in Azure Databricks and SQL to design, build, and optimize scalable data pipelines. The ideal candidate will have hands-on experience in developing ETL workflows, handling structured and unstructured data, and building efficient data models to support analytical and reporting needs.
Key Responsibilities
Design and implement robust data pipelines (ETL/ELT) using Azure Databricks.
Extract, transform, clean, and enrich data from multiple structured and unstructured data sources.
Develop scalable data models to support business intelligence and analytics requirements.
Handle semi-structured and unstructured data (logs, JSON, text files, etc.) alongside structured datasets.
Optimize Spark jobs, queries, and pipelines for performance and cost efficiency.
Monitor data workflows and troubleshoot data processing issues.
Collaborate with data architects, analysts, and business stakeholders to understand requirements.
Ensure data quality, integrity, and governance standards are maintained.
Implement performance tuning strategies for SQL queries and Spark workloads.
Required Technical Skills
Strong hands-on experience with Azure Databricks
Experience with PySpark / Spark SQL
Strong proficiency in SQL
Experience with Azure Data Lake (ADLS Gen2)
Experience building end-to-end ETL pipelines
Knowledge of Delta Lake
Experience in data modeling (Star schema, Fact/Dimension design)
Experience handling semi-structured and unstructured data formats (JSON, XML, logs, text)
Good to Have
Experience with Azure Data Factory (ADF)
CI/CD implementation in Azure DevOps
Experience with streaming data (Kafka / Structured Streaming)
Understanding of Lakehouse architecture
Exposure to Power BI or other BI tools
Educational Qualification
Bachelor's degree in Computer Science, Engineering, or related field
Skills: Azure, Databricks, SQL