- Should have hands-on experience working with ADLS, ADF, and Azure SQL DW.
- Should have a minimum of 3 years' experience delivering Azure projects.
Must Have:
- 3 to 8 years of experience designing, developing, and deploying ETL processes on Databricks to support data integration and transformation.
- Experience optimizing and tuning Databricks jobs for performance and scalability.
- Experience with Scala and/or Python programming languages.
- Proficiency in SQL for querying and managing data.
- Expertise in ETL (Extract, Transform, Load) processes.
- Knowledge of data modeling and data warehousing concepts.
- Experience implementing best practices for data pipelines, including monitoring, logging, and error handling.
- Excellent problem-solving and analytical skills, with strong attention to detail.
- Excellent written and verbal communication skills.
- Experience with version control systems (e.g., Git) for managing and tracking changes to the codebase.
- Ability to document technical designs, processes, and procedures related to Databricks development.
- Commitment to staying current with Databricks platform updates and recommending improvements to existing processes.
Good to Have:
- Agile delivery experience, including knowledge of Agile and Scrum software development methodologies.
- Experience with cloud services, particularly Azure (Azure Databricks), AWS (AWS Glue, EMR), or Google Cloud Platform (GCP).
- Understanding of data lake architectures.
- Familiarity with tools like Apache NiFi, Talend, or Informatica.
- Skills in designing and implementing data models.