Aditi Tech Consulting Private Limited

Databricks Engineer

6-8 Years
  • Posted 13 days ago

Job Description

Summary:

As a Databricks Engineer, you will play a key role in building and operating high-quality data platforms that support analytics, machine learning, and critical business decision-making. This role requires strong end-to-end ownership of data pipelines, deep hands-on expertise with Databricks, and close collaboration with engineering, data science, and product teams to deliver reliable, scalable, and production-ready data solutions.

Responsibilities:

  • Data Engineering & Pipeline Ownership:
    • Own the end-to-end lifecycle of data pipelines, including data ingestion, data cleansing, transformation, and ML inference.
    • Design, build, and operate robust, scalable, and performant data pipelines using Databricks.
    • Ensure data solutions meet high standards for data quality, reliability, scalability, and operational excellence.
  • Databricks Platform Ownership:
    • Take ownership of the team's Databricks workspace, including workspace configuration and optimization.
    • Manage clusters, jobs, security, access controls, and governance.
    • Define and promote best practices and standards.
    • Drive continuous improvement in Databricks usage, performance, and cost efficiency.
  • Software Development Lifecycle:
    • Contribute across the full development lifecycle, including requirements analysis and solution design.
    • Implement and automate tests, deploy, and ensure production readiness.
    • Maintain and support ongoing systems, applying engineering best practices such as clean code, code reviews, CI/CD integration, and documentation.
  • Collaboration & Delivery:
    • Partner closely with software engineers, data scientists, and product teams to deliver reliable, high-quality data solutions.
    • Translate analytical and business requirements into well-designed, production-grade data pipelines.
    • Support troubleshooting, incident resolution, and continuous improvement in production environments.

Requirements:

  • Bachelor's degree in Information Technology, Computer Science, or equivalent education.
  • 6+ years of experience in the software engineering field.
  • Hands-on experience with Apache Spark or Databricks, preferably using Python or Java.
  • Proven experience building large-scale data or ETL pipelines that handle high-volume datasets.
  • Experience working in cloud-native environments, ideally on AWS.
  • Strong understanding of software craftsmanship, including writing clean, maintainable, and well-tested code.
  • Experience working in Agile/Scrum environments.
  • Must have knowledge of AI agents and hands-on experience using them.
  • Strong problem-solving skills and ability to work independently on complex problems.
  • Strong communication skills, both verbal and written, and the ability to quickly learn and implement new technologies.
  • Strong relationship-building, collaboration, and organizational skills, with a high degree of initiative and self-motivation.
  • Willingness and ability to learn and take on challenging opportunities.
  • Knowledge of payments domain and Indian payment ecosystem is desirable.

Required Skills:

  • Databricks & Apache Spark (Expert level)
  • Data Engineering & ETL Pipeline Development (Expert level)
  • Cloud & DevOps (AWS, CI/CD) (Advanced level)

Preferred Skills:

  • Working knowledge of Scala, especially in the context of Spark-based workloads.
  • Hands-on experience managing pipeline development, deployment stages, and usage reporting within Databricks.
  • Experience handling regulated or sensitive data, with an understanding of data security, privacy, and compliance requirements.
  • Familiarity with SQL and NoSQL databases and messaging or streaming systems, such as Redis, ElastiCache, DynamoDB, Amazon S3, Kinesis, or Kafka.
  • Experience with monitoring, observability, and alerting tools, and supporting high-traffic, customer-facing platforms, using tools such as Grafana or Prometheus.
  • Experience writing automated acceptance and integration tests that are fully integrated into CI/CD pipelines.
  • Exposure to cloud-native tooling and infrastructure, including AWS, Docker, Kubernetes, and Infrastructure-as-Code tools such as Terraform.


#AditiConsulting
# 26-02670

Job ID: 146757477
