About the Job: We are seeking a skilled and experienced Databricks Developer to join our data engineering team. The ideal candidate will have a strong background in big data technologies and cloud platforms (preferably Azure), along with hands-on experience using Databricks to build scalable data pipelines and analytics solutions.
About the Role: Senior Analyst Programmer
Location: Gurgaon/Noida/Bangalore/Pune
Responsibilities:
- Design, develop, and optimize data pipelines using Apache Spark on Databricks.
- Implement ETL/ELT workflows to ingest, transform, and load data from various sources.
- Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
- Ensure data quality, integrity, and governance across all pipelines.
- Monitor and troubleshoot performance issues in Databricks notebooks and jobs.
- Integrate Databricks with cloud services (e.g., Azure Data Lake Storage) and enterprise data platforms.
- Participate in code reviews, documentation, and agile ceremonies.
Qualifications:
- 5+ years of experience in data engineering or big data development.
- Strong proficiency in Apache Spark, PySpark, and SQL.
- Hands-on experience with the Databricks platform, including notebooks, jobs, and clusters.
- Experience with cloud platforms (Azure) and cloud-native data services.
- Familiarity with Delta Lake, MLflow, and Databricks Workflows.
- Knowledge of CI/CD practices and version control with Git.
- Strong problem-solving and communication skills.
Preferred Skills:
- Databricks certification (e.g., Databricks Certified Data Engineer Associate/Professional).
- Experience with data modeling, data warehousing, and BI tools.
- Exposure to DevOps practices and Infrastructure as Code (IaC) tools such as Terraform.
- Experience working in Agile/Scrum environments.