Exponentia.ai

Sr Data Engineer

Job Description

About Exponentia.ai

Exponentia.ai is a fast-growing AI-first technology services company, partnering with enterprises to shape and accelerate their journey to AI maturity. With a presence across the US, UK, UAE, India, and Singapore, we bring together deep domain knowledge, cloud-scale engineering, and cutting-edge artificial intelligence to help our clients transform into agile, insight-driven organizations.

We are proud to partner with global technology leaders such as Databricks, Microsoft, AWS, and Qlik, and have been consistently recognized for innovation, delivery excellence, and trusted advisory services.

Awards & Recognitions

  • Innovation Partner of the Year, Databricks, 2024
  • Digital Impact Award (TMT Sector), UK, 2024
  • Rising Star, APJ Databricks Partner Awards, 2023
  • Qlik's Most Enabled Partner, APAC

With a team of 450+ AI engineers, data scientists, and consultants, we are on a mission to redefine how work is done by combining human intelligence with AI agents to deliver exponential outcomes.

Learn more: www.exponentia.ai

About The Role

We are seeking an experienced Senior Data Engineer with strong hands-on expertise in building and delivering Databricks-based data solutions. The ideal candidate will design and develop high-quality data pipelines, optimize Spark workloads, participate in architectural decision-making, and guide the team in delivering scalable, reliable, and cost-efficient data platforms.

Key Responsibilities

Databricks & Data Engineering

  • Design, develop, and optimize data pipelines using Databricks (SQL, Python, PySpark).
  • Work with Delta Lake, Lakehouse architecture, and Unity Catalog for governance and lineage.
  • Implement and manage Databricks workflows, cluster configurations, and job scheduling.
  • Troubleshoot and optimize Spark jobs for performance, reliability, and cost efficiency.
  • Integrate Databricks with cloud platforms (AWS/Azure/GCP), data lakes, and external systems.
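
As an illustration of the pipeline work described above, the sketch below shows a minimal Databricks-style batch job in PySpark with Delta Lake: read a raw table, apply light cleansing, and publish a curated Delta table. It is only an assumed example; the table names (raw_orders, silver.orders) and columns are hypothetical and not part of this posting.

```python
# Minimal sketch (not part of the posting) of a Databricks-style batch pipeline
# using PySpark and Delta Lake; raw_orders and silver.orders are hypothetical names.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, a session already exists

# Read a raw landing table, apply light cleansing, and publish a curated Delta table.
raw = spark.read.table("raw_orders")

curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

(curated.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("silver.orders"))
```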

Technical Ownership & Development

  • Own the end-to-end design and delivery of complex engineering and data solutions.
  • Build scalable ETL/ELT pipelines and collaborate with architects on solution design.
  • Review code, enforce best practices, and maintain high engineering standards.
  • Create reusable components, frameworks, and automation scripts.

Collaboration & Leadership

  • Mentor junior/mid-level engineers on Databricks, PySpark, and data engineering best practices.
  • Collaborate closely with product owners, architects, and cross-functional teams.
  • Participate in sprint planning, design reviews, and architectural discussions.

Quality, Testing & DevOps

  • Implement CI/CD pipelines for Databricks (Databricks Repos, GitHub/Azure DevOps/GitLab).
  • Ensure high-quality deliverables through unit testing, data validation, and automated checks.
  • Support production deployments, monitoring, and incident resolution.
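
As a small illustration of the unit-testing and data-validation expectation above, the sketch below tests a hypothetical PySpark transform with pytest against a local SparkSession, so it can run in CI outside Databricks. The function add_order_date and its columns are assumptions made for the example, not part of this posting.

```python
# Minimal sketch (not part of the posting) of a pytest unit test for a PySpark
# transform; add_order_date and its columns are hypothetical.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_order_date(df):
    """Example transform under test: derive a date column from a timestamp string."""
    return df.withColumn("order_date", F.to_date("order_ts"))


@pytest.fixture(scope="session")
def spark():
    # Local SparkSession so the test can run outside Databricks (e.g. in CI).
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_add_order_date(spark):
    df = spark.createDataFrame([("o1", "2024-05-01 10:00:00")], ["order_id", "order_ts"])
    row = add_order_date(df).first()
    assert str(row["order_date"]) == "2024-05-01"
```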

Job ID: 137379229