We are seeking an experienced Databricks Consultant to design, develop, and optimize enterprise-grade data platforms. The candidate will play a key role in solution implementation, performance tuning, stakeholder communication, and mentoring junior engineers.
Key Responsibilities
Design scalable and secure data architectures using the Databricks Lakehouse.
Develop advanced ETL/ELT pipelines and optimize Spark workloads.
Lead data migration projects from traditional warehouses to Databricks.
Implement Delta Lake best practices (partitioning, Z-ordering, optimization).
Design data models using star schema, snowflake schema, and Data Vault approaches.
Ensure governance, security, and access control (Unity Catalog, RBAC).
Collaborate with business stakeholders to translate requirements into technical solutions.
Implement CI/CD pipelines and DevOps best practices for data engineering.
Mentor Associate Consultants and conduct code reviews.
Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field.
5-7 years of experience in data engineering.
Strong hands-on expertise in Databricks and Apache Spark.
Deep understanding of performance tuning and Spark optimization techniques.
Strong experience with at least one major cloud platform (AWS, Azure, or GCP).
Strong skills in SQL, PySpark, Python, and data modelling.
Experience with orchestration tools (Airflow, ADF, etc.).
Knowledge of data governance and security frameworks.
Experience with real-time data processing.
Exposure to implementing Delta Live Tables (DLT) pipelines in Databricks.
Databricks Professional certification.
What We Offer
Impact: Play a pivotal role in shaping a rapidly growing venture studio.
Culture: Thrive in a collaborative, innovative environment that values creativity and ownership.
Growth: Access professional development opportunities and mentorship.
Benefits: Competitive salary, health/wellness packages, and flexible work options.
Job ID: 143854209