
Senior Databricks Engineer

6-8 Years

Job Description

Shift: 2

Remaining Positions: 1


Details:

We are looking for an experienced Senior Databricks Engineer to design, build, and optimize scalable data solutions on the Databricks platform. The ideal candidate will have strong expertise in data engineering, Spark, SQL, Python/Scala, and cloud-based data platforms. This role involves working closely with data architects, analysts, and business teams to deliver reliable, high-performance data pipelines and analytics solutions.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark
  • Build batch and streaming data processing solutions for large-scale datasets
  • Optimize ETL/ELT workflows for performance, reliability, and cost efficiency
  • Integrate data from multiple sources into curated data models
  • Implement data quality checks, monitoring, and troubleshooting for production pipelines
  • Collaborate with stakeholders to understand data requirements and translate them into technical solutions
  • Develop reusable frameworks, notebooks, and libraries to standardize data engineering work
  • Ensure adherence to data governance, security, and compliance best practices
  • Support migration of legacy data workloads to Databricks and cloud platforms
  • Mentor junior engineers and contribute to engineering best practices
  • Apply best practices in schema management, data observability, data governance, and performance optimization

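To give candidates a concrete feel for the data-quality responsibility above, here is a minimal sketch in plain Python. The function name and rules are hypothetical; in practice, checks like these would run against Spark DataFrames on Databricks (e.g., via expectations on Delta Live Tables), but the logic is the same.

```python
# Minimal data-quality check sketch (hypothetical names; plain-Python
# stand-in for checks that would normally run on Spark DataFrames).

def run_quality_checks(rows, required_fields):
    """Return a list of human-readable violations found in `rows`."""
    violations = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Completeness: every required field must be present and non-empty.
        for field in required_fields:
            if not row.get(field):
                violations.append(f"row {i}: missing '{field}'")
        # Uniqueness: the primary key must not repeat.
        row_id = row.get("id")
        if row_id in seen_ids:
            violations.append(f"row {i}: duplicate id {row_id}")
        seen_ids.add(row_id)
    return violations

# Example: one clean row, one with a missing field, one duplicate id.
records = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": ""},
    {"id": 1, "name": "c"},
]
print(run_quality_checks(records, ["id", "name"]))
```

In a production pipeline, violations would feed monitoring and alerting rather than a print statement.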
Job Requirements

Required Qualifications

  • 6+ years of experience in data engineering or related roles
  • Strong hands-on experience with Databricks, Apache Spark, Delta Lake, and Unity Catalog
  • Proficiency in Python, Scala, and SQL
  • Experience with cloud platforms such as Azure, AWS, or GCP
  • Strong understanding of ETL/ELT, data modeling, and data warehousing concepts
  • Experience with orchestration tools such as Airflow, ADF, or similar
  • Familiarity with CI/CD, Git, and deployment automation for data solutions
  • Excellent problem-solving, communication, and collaboration skills
  • Databricks Platform Expertise: Strong proficiency in using the Databricks Lakehouse Platform for data engineering tasks.
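The ETL/ELT and data-modeling background asked for above boils down to a pattern that can be sketched in a few lines. This toy example uses plain Python with a hypothetical schema; on Databricks, the same extract/transform/load shape would be expressed with Spark DataFrames and a Delta `MERGE` for the upsert step.

```python
# Toy ETL sketch: extract raw records, transform (cast + normalize), and
# load into a curated "table" (here just a dict keyed by id). The schema
# and field names are hypothetical.

def extract():
    # Stand-in for reading from a source system.
    return [
        {"id": "1", "amount": "10.5", "country": "us"},
        {"id": "2", "amount": "3.0", "country": "DE"},
    ]

def transform(raw):
    # Cast types and normalize values into the curated schema.
    return [
        {"id": int(r["id"]),
         "amount": float(r["amount"]),
         "country": r["country"].upper()}
        for r in raw
    ]

def load(rows, table):
    # Upsert by primary key, mimicking a Delta MERGE on matching ids.
    for r in rows:
        table[r["id"]] = r
    return table

curated = load(transform(extract()), {})
```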

Preferred Qualifications

  • Databricks certification
  • Experience with real-time/streaming data processing using Kafka or similar tools
  • Exposure to big data ecosystems and modern data lakehouse architecture
  • Experience working in Agile/Scrum environments
  • Knowledge of data governance, access control, and audit requirements

Education

Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field


Pay Range:

Based on Experience


About Company

We are a global company with 30 years of experience in the market, offering a robust portfolio of services such as automation, cloud, Internet of Things (IoT), and user experience (UX). Today, we provide a broad range of solutions, combining innovative consulting, marketing, mobility, personalized campaigns, and artificial intelligence services with traditional offerings such as service desk, field service, and outsourcing (BPO). We maintain our excellence by investing in technological innovation, strong partnerships, worldwide acquisitions, and the hiring of highly trained professionals.

Job ID: 147500825
