Data Platform Engineer

  • Posted 3 hours ago

Job Description

Project Role: Data Platform Engineer

Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.

Must-have skills: Microsoft Azure Databricks

Good-to-have skills: NA

Minimum 3 year(s) of experience is required.

Educational Qualification: 15 years of full-time education

Summary:

As a Microsoft Azure Databricks Engineer, you will be responsible for designing, developing, and optimizing large-scale data processing solutions using Azure Databricks. You will act as a senior contributor within the team, translating business and data requirements into scalable Spark-based pipelines and curated datasets. Your daily activities will include building Databricks notebooks, implementing data transformations, optimizing jobs and clusters, and ensuring data quality and performance. You will collaborate closely with cross-functional teams, guide junior analysts, and contribute to technical decisions that enhance reliability, efficiency, and standardization of Databricks solutions.
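As a small illustration of the data-quality work described above, a reconciliation check in Spark SQL might look like the following sketch (all table and column names are hypothetical):

```sql
-- Hypothetical reconciliation: compare row counts and totals
-- between the raw extract and the curated Delta table, so gaps
-- introduced by a transformation step are caught early.
SELECT
  (SELECT COUNT(*)    FROM bronze.orders_raw) AS source_rows,
  (SELECT COUNT(*)    FROM silver.orders)     AS curated_rows,
  (SELECT SUM(amount) FROM silver.orders)     AS curated_amount_total;
```

A check like this is typically scheduled after each pipeline run, with an alert raised when the counts diverge.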

Roles & Responsibilities:

  • Act as a Senior Analyst and subject-matter contributor for Azure Databricks–based data engineering solutions.
  • Analyze business and data requirements and translate them into technical designs aligned with Databricks and Spark best practices.
  • Design and develop data ingestion and transformation pipelines using Azure Databricks and Azure Data Factory.
  • Build and manage Delta Lake–based data layers on Azure Data Lake Storage Gen2.
  • Develop and optimize Spark-based transformations using PySpark and Spark SQL.
  • Implement medallion architecture patterns (Bronze, Silver, Gold) for raw, curated, and serving datasets.
  • Optimize cluster configuration, job execution, and data processing performance.
  • Perform data validation and reconciliation to ensure data accuracy and completeness.
  • Collaborate with architects and team leads on design decisions and solution improvements.
  • Troubleshoot and resolve Databricks job failures, data issues, and performance bottlenecks.
  • Participate in code reviews, notebook optimization, documentation, and knowledge-sharing sessions.
  • Support production deployments, monitoring, and ongoing enhancements.
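For orientation, the medallion pattern referenced above can be sketched in Databricks Spark SQL roughly as follows; the schemas, tables, and columns are hypothetical, and real pipelines would add error handling and incremental loads:

```sql
-- Bronze: land raw data as-is into a Delta table
-- (read_files is a Databricks table-valued function; path is hypothetical)
CREATE TABLE IF NOT EXISTS bronze.orders_raw
USING DELTA
AS SELECT * FROM read_files('/mnt/landing/orders/', format => 'json');

-- Silver: cleaned, typed, and deduplicated curated layer
CREATE OR REPLACE TABLE silver.orders AS
SELECT DISTINCT
  CAST(order_id AS BIGINT)      AS order_id,
  CAST(order_ts AS TIMESTAMP)   AS order_ts,
  TRIM(customer_id)             AS customer_id,
  CAST(amount AS DECIMAL(18,2)) AS amount
FROM bronze.orders_raw
WHERE order_id IS NOT NULL;

-- Gold: aggregated serving layer for reporting
CREATE OR REPLACE TABLE gold.daily_revenue AS
SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM silver.orders
GROUP BY DATE(order_ts);
```

Each layer is persisted as a Delta table on ADLS Gen2, so downstream consumers read from Silver/Gold rather than from raw files.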

Professional & Technical Skills:

Must-have skills, centered on Microsoft Azure Databricks:

  • Hands-on experience with Azure Databricks, including notebook development, job execution, and cluster usage and configuration.
  • Strong proficiency in PySpark and Spark SQL for data processing and transformations.
  • Strong proficiency in SQL for analytical querying and modeling.
  • Understanding of data engineering and data warehousing concepts.
  • Familiarity with batch and incremental data processing patterns.
  • Ability to debug and optimize Spark jobs and Databricks workflows.
  • Basic understanding of Git-based version control and deployment practices.
  • Experience working with Delta Lake and Azure Data Lake Storage Gen2.
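As a sketch of the incremental processing pattern mentioned above, a Delta Lake upsert in Spark SQL typically uses MERGE INTO; the table and column names here are hypothetical:

```sql
-- Hypothetical incremental upsert: apply the latest batch of changes
-- from a staging table into the curated Delta table.
MERGE INTO silver.orders AS target
USING staging.orders_updates AS source
ON target.order_id = source.order_id
WHEN MATCHED THEN
  UPDATE SET
    target.order_ts    = source.order_ts,
    target.customer_id = source.customer_id,
    target.amount      = source.amount
WHEN NOT MATCHED THEN
  INSERT (order_id, order_ts, customer_id, amount)
  VALUES (source.order_id, source.order_ts, source.customer_id, source.amount);
```

Because MERGE only touches matched and new rows, it avoids the full-table rewrite that a truncate-and-reload batch job would require.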

Additional Information:

  • The candidate should have 4–5 years of experience in data engineering or big data platforms.
  • This position is based at our Bengaluru, Chennai, or Hyderabad office.
  • Minimum 15 years of full-time education (or equivalent).
  • Agile/project-based delivery with close collaboration across teams.
  • Independently handles assigned Databricks pipelines or modules.
  • Actively contributes to improving performance and coding standards.
  • Provides reliable support during testing and production issues.

Job ID: 147485987
