
Senior Databricks Data Engineer

Company name confidential
4-8 Years
  • Posted 9 hours ago

Job Description

Role Overview:

We are looking for a highly skilled and experienced Senior Databricks Data Engineer to join our data engineering team. In this role, you will be responsible for designing, building, and optimizing scalable data pipelines and solutions using the Databricks Lakehouse Platform. You will work closely with data scientists, analysts, and business stakeholders to ensure reliable and efficient data delivery across the organization.

Key Responsibilities:

Design and develop scalable ETL/ELT pipelines using Apache Spark on Databricks.

Build and maintain robust data lakes and data warehouses (Delta Lake, Lakehouse architecture).

Optimize data flows for performance, scalability, and cost-efficiency on the Databricks platform.

Collaborate with data analysts, scientists, and other engineers to integrate data from diverse sources.

Implement data quality checks, monitoring, and alerting solutions.

Develop CI/CD pipelines for data jobs and workflows.

Support the migration of legacy data systems to the Databricks ecosystem.

Contribute to data governance and security best practices.

Required Skills & Qualifications:

4+ years of experience in data engineering.

Strong proficiency in Apache Spark, PySpark, SQL, Python, and Delta Lake.

Experience with cloud platforms (Azure, AWS, or GCP), preferably Azure Databricks, Azure Data Factory, and Azure Synapse.

Deep understanding of data modeling, data warehousing, and ETL processes.

Hands-on experience with job orchestration tools like Databricks Workflows, Airflow, or similar.

Solid understanding of CI/CD, version control (Git), and DevOps practices for data projects.

Good to Have:

Experience with AWS services (S3, Glue, Redshift).

Experience with streaming data (Structured Streaming, Kafka).

Familiarity with tools like dbt, MLflow, or Power BI/Tableau.

Experience in a regulated or enterprise environment (finance, healthcare, etc.).

Why Join Us:

Be part of building something from the ground up in a high-growth, high-impact domain.

Work alongside passionate experts in AI, data, and industry consulting.

Competitive base + uncapped commission structure tied directly to performance.

Remote-first flexibility with real ownership and career growth potential.

Perks and Benefits:

Employees are entitled to flexible working hours to support work-life balance.

The company operates on a 5-day work week.

A healthy, inclusive, and collaborative work environment is maintained.

The company organizes Fun Fridays and festive celebrations to foster team spirit.

Employees have access to opportunities for continuous learning and career growth.

An annual company trip is organized for team building and relaxation.

Comprehensive medical insurance benefits are provided to employees.

Performance-based bonuses and annual salary revisions are offered.

A hybrid working model is available, allowing a mix of in-office and remote work as per company policy.

More Info

Open to candidates from:
India

Job ID: 147402871


Ahmedabad, India

Skills:

PySpark, Apache Spark, SQL, Azure Synapse, Git, Azure Data Factory, GCP, Databricks, Azure, Python, AWS, Airflow, Delta Lake, Databricks Workflows