Job Description
We are seeking a hands-on Sr. Databricks Data Engineer to design, develop, and optimize data pipelines and analytics solutions.
Requirements
Roles & Responsibilities:
The ideal candidate will have strong experience in data engineering, ETL development, and production support, ensuring reliable, scalable, and high-performing data operations within an Azure environment, and will thrive in a fast-paced setting. Knowledge of the supply chain or insurance domain and of Power BI is a plus but not mandatory.
Must Have Technical/Functional Skills:
● 8+ years of overall experience in data engineering or related fields.
● 2.5+ years of hands-on experience with Databricks and Spark.
● Strong proficiency in SQL and data analysis techniques.
● Experience with ETL processes, data modeling, and performance tuning.
● Familiarity with Python or Scala for data engineering tasks.
● Excellent problem-solving and communication skills.
Development:
● Design, develop, and deploy scalable ETL/ELT data pipelines using Apache Spark, PySpark, and Databricks.
● Develop and optimize SQL queries for data transformation and analysis.
● Collaborate with product owners, data architects, and analysts to build data models, Delta Lake structures, and data workflows.
● Collaborate with data analysts and business teams to deliver actionable insights.
● Build job orchestration and monitoring solutions.
● Ensure data quality, performance, and reliability across workflows.
● Develop and maintain CI/CD pipelines for Databricks notebooks, jobs, and workflows.
● Work with cloud-based data platforms (Google Cloud preferred).