
BAJAJ FINSERV HEALTH

Senior Data Engineer


Job Description

Location Name: Pune Corporate Office - Mantri

Job Purpose

To effectively design, develop, and manage data solutions using ETL technologies such as Azure Databricks (ADB), Microsoft Fabric, Azure Data Factory (ADF), and SQL.

To manage and maintain the Azure Data Lake platform, including storage optimisation.

To manage and maintain the Databricks platform, including admin activities, with a willingness to work on new features such as Databricks Apps, Agent Bricks, and AI/BI dashboards.

Should be well versed in Python, PySpark, and Scala.

Duties And Responsibilities

Key Roles –

  •  Databricks Development: Build and manage Databricks notebooks using SQL and PySpark within a reusable framework.
  •  ETL Development: Design and maintain data integration pipelines in Azure Data Factory.
  •  Database Proficiency: Strong knowledge of SQL and experience with relational databases like SQL Server, MySQL, etc.
  •  CI/CD & Release Management: Utilize DevOps pipelines for code versioning, automated testing, and release.
  •  Cloud Platform: Hands-on experience with Azure cloud services and architecture best practices.
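The "reusable framework" called out in the Databricks Development bullet above can be sketched as a small pipeline class that chains named transformation steps, so the same steps can be shared across notebooks. This is an illustrative sketch only: the class, step functions, and data shapes are invented for the example (plain Python lists of dicts stand in for Spark DataFrames).

```python
from typing import Callable, Dict, List

Row = Dict[str, object]


class Pipeline:
    """Chains transformation steps so notebooks can reuse them."""

    def __init__(self) -> None:
        self.steps: List[Callable[[List[Row]], List[Row]]] = []

    def add_step(self, fn: Callable[[List[Row]], List[Row]]) -> "Pipeline":
        self.steps.append(fn)
        return self  # fluent chaining: p.add_step(a).add_step(b)

    def run(self, rows: List[Row]) -> List[Row]:
        for fn in self.steps:
            rows = fn(rows)
        return rows


def drop_nulls(rows: List[Row]) -> List[Row]:
    """Reusable cleansing step: discard rows containing any null value."""
    return [r for r in rows if all(v is not None for v in r.values())]


def add_gross(rows: List[Row]) -> List[Row]:
    """Hypothetical enrichment step: apply a flat 50% uplift to 'net'."""
    return [{**r, "gross": r["net"] * 1.5} for r in rows]


pipeline = Pipeline().add_step(drop_nulls).add_step(add_gross)
result = pipeline.run([{"net": 100.0}, {"net": None}])
print(result)  # [{'net': 100.0, 'gross': 150.0}]
```

In a real Databricks notebook the step functions would take and return Spark DataFrames, but the chaining pattern is the same.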

Key Responsibilities –

  •  Translate business requirements into technical solutions in collaboration with the PMO team.
  •  Own end-to-end delivery of data projects, ensuring on-time execution and adherence to quality standards.
  •  Design technical architecture and guide development efforts for enhancements and new projects.
  •  Develop and maintain robust ETL pipelines and data integration modules across systems.
  •  Ensure high data quality, platform stability, and resolution of critical process issues.
  •  Monitor and resolve performance bottlenecks in data workflows and programs.
  •  Establish best practices, standard operating procedures, and drive their implementation across teams.
  •  Act as a liaison with business users and product managers to support daily data needs and strategic initiatives.
  •  Coordinate with internal and external development teams to troubleshoot and resolve issues efficiently.
  •  Manage workload through effective planning, prioritization, and progress tracking.

Required Qualifications And Experience


Required Skills & Experience –

  •  Azure Databricks – Python, PySpark, SQL – Must Have
  •  Azure Data Factory – for ETL & data integrations – Must Have
  •  Microsoft Fabric – admin & ETL
  •  OOP concept implementation in Python
  •  Knowledge of dashboarding tools such as Power BI – Good to Have
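The "OOP concept implementation in Python" requirement above typically means things like abstract base classes and inheritance applied to pipeline code. A minimal, hypothetical sketch (all class and method names invented for illustration): a base ETL class fixes the extract/transform/load order, and subclasses supply the steps.

```python
from abc import ABC, abstractmethod


class BaseETL(ABC):
    """Template-method pattern: run() fixes the order, subclasses fill in steps."""

    def run(self):
        data = self.extract()
        return self.load(self.transform(data))

    @abstractmethod
    def extract(self):
        """Fetch raw records from a source."""

    @abstractmethod
    def transform(self, data):
        """Reshape or cleanse the records."""

    def load(self, data):
        """Default sink shared by all subclasses; here it just materialises."""
        return list(data)


class UppercaseETL(BaseETL):
    """Toy concrete pipeline: uppercases a hard-coded list of city names."""

    def extract(self):
        return ["pune", "mumbai"]

    def transform(self, data):
        return (s.upper() for s in data)


print(UppercaseETL().run())  # ['PUNE', 'MUMBAI']
```

The same pattern scales to real pipelines by swapping the toy extract/load methods for database reads and Delta Lake writes.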



Job ID: 147319781
