
Adarsh Solutions Private Limited

Databricks Data Platform Engineer / Data Engineer

3-6 Years

Job Description

Hiring: Databricks Data Platform Engineer / Data Engineer

Location: Bangalore

Experience: 3+ Years

Employment Type: Permanent

Role Overview

We are looking for an experienced Databricks Data Platform Engineer to join our specialist team working on Data Engineering, Data Science, and Geospatial projects and products.

In this role, you will leverage Databricks, cloud platforms, SQL, and modern data engineering practices to build scalable, reliable, and high-performance data platforms aligned with enterprise data architecture standards.

If you are detail-oriented, analytical, and passionate about building robust data pipelines, we'd love to hear from you.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL/ELT workflows using Databricks, Google BigQuery, SQL, and cloud-native tools (see the illustrative pipeline sketch after this list).
  • Build and optimize batch and streaming pipelines to support analytics, reporting, and BI use cases.
  • Collaborate with business stakeholders, product teams, analytics engineers, and data analysts to gather requirements and deliver data solutions.
  • Develop and manage data models, schemas, and transformations, ensuring data quality, integrity, and consistency.
  • Optimize SQL queries, partitioning, clustering, and indexing for performance and cost efficiency.
  • Support BI tools and dashboards by providing clean, reliable, analytics-ready datasets.
  • Implement and monitor data quality checks, validation rules, and error handling across pipelines.
  • Troubleshoot and resolve pipeline failures, performance bottlenecks, and data inconsistencies across dev, test, and prod environments.
  • Ensure compliance with data governance, security, access controls, and privacy standards.
  • Work directly with clients and external stakeholders to gather requirements, present deliverables, and manage expectations.
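
As a purely illustrative sketch of the pipeline work described above (not part of the role definition), the snippet below shows a minimal batch ETL step that could run as a Databricks job: it reads raw records from cloud storage, applies a basic data-quality filter, and publishes the result as a partitioned Delta table. The paths, schema, table names, and quality rule are hypothetical placeholders.

    # Illustrative sketch only: paths, schema, and table names are hypothetical
    # placeholders, not part of this posting.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

    # Read raw orders landed in cloud storage (hypothetical path).
    raw = spark.read.json("/mnt/landing/orders/")

    # Basic transformation plus a simple data-quality filter before publishing.
    clean = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .filter(F.col("order_id").isNotNull())
           .dropDuplicates(["order_id"])
    )

    # Write as a Delta table partitioned by date for downstream analytics and BI.
    (clean.withColumn("order_date", F.to_date("order_ts"))
          .write.format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .saveAsTable("analytics.orders_clean"))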

Requirements & Skills

  • 3-6 years of experience as a Data Engineer, Analytics Engineer, or ETL Developer.
  • Advanced proficiency in SQL (complex queries, window functions, performance tuning).
  • Strong hands-on experience with Google BigQuery (partitioning, clustering, cost optimization; see the query sketch after this list).
  • Experience building data pipelines using Databricks (Apache Spark, Delta Lake).
  • Solid understanding of ETL/ELT architecture, data warehousing, dimensional modeling, and star/snowflake schemas.
  • Experience with Python and/or Scala for data processing and automation.
  • Familiarity with cloud platforms such as GCP, Azure, or AWS.
  • Experience with orchestration tools like Airflow, Databricks Workflows, or similar (preferred).
  • Knowledge of data governance, security, IAM, and compliance frameworks is a plus.
  • Strong client-facing, communication, and problem-solving skills.
  • Ability to work independently in a hybrid or remote environment.
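
To illustrate the BigQuery and SQL skills listed above (again, only a sketch and not part of the posting), the snippet below runs a window-function query against a date-partitioned table, with a date filter so partition pruning keeps the scan and cost small. The project, dataset, and table names are hypothetical placeholders.

    # Illustrative sketch only: project, dataset, and table names are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses default GCP credentials

    sql = """
    SELECT
      customer_id,
      order_date,
      order_total,
      -- Window function: running spend per customer.
      SUM(order_total) OVER (
        PARTITION BY customer_id ORDER BY order_date
      ) AS running_spend
    FROM `my-project.analytics.orders_clean`
    -- Filtering on the partition column prunes partitions and limits cost.
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    """

    for row in client.query(sql).result():
        print(row.customer_id, row.order_date, row.running_spend)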

Why Join Us

  • Work on modern, cloud-native data platforms
  • Exposure to data engineering, data science, and geospatial solutions
  • Collaborative environment with strong technical ownership

Interested candidates can share their CV at: [Confidential Information]

For regular job updates, kindly join our company LinkedIn group:

https://www.linkedin.com/groups/14581025/

More Info

Open to candidates from: Indian

Job ID: 138491773