About The Role
We are seeking a highly skilled Databricks Developer with 5+ years of experience in big data engineering and cloud-based data platforms. The ideal candidate will have strong expertise in Databricks, distributed data processing, and modern data engineering practices. This role involves designing, developing, and optimizing scalable data pipelines and analytics solutions to support business intelligence and advanced analytics initiatives.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using Databricks.
- Build and optimize ETL/ELT workflows using PySpark, Spark SQL, and Delta Lake.
- Develop and manage data solutions on cloud platforms such as Azure, AWS, or GCP.
- Work with structured and unstructured datasets from multiple sources.
- Implement data quality checks, monitoring, and performance tuning.
- Collaborate with data scientists, analysts, and cross-functional teams to deliver data-driven solutions.
- Ensure data governance, security, and compliance best practices.
- Participate in code reviews, technical design discussions, and documentation.
Required Skills & Qualifications
- 5+ years of experience in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks platform.
- Proficiency in PySpark, Spark SQL, and Python.
- Experience with Delta Lake, Data Lakes, and Data Warehousing concepts.
- Knowledge of cloud platforms (Azure/AWS/GCP).
- Experience with workflow orchestration tools (e.g., Airflow).
- Strong SQL skills and database knowledge.
- Understanding of CI/CD practices and DevOps integration.
- Excellent problem-solving and communication skills.
Job Category: Databricks Developer
Job Type: Full Time
Job Location: Bangalore & Mysore
Experience: 5+ Years