
Sagility

Data Engineer

  • Posted 16 hours ago

Job Description

Job Role Summary:

We are looking for a Data Engineer with experience in Azure Databricks, PySpark, and SQL to build and maintain ETL/ELT data pipelines. The role involves working with large datasets, performing data transformations, and supporting the migration of data from Oracle systems to the Azure cloud. The candidate will also work with BI and analytics teams to prepare reliable datasets for reporting and insights. Strong skills in SQL, Python, data validation, and performance optimization are required.

Key Responsibilities

1. Cloud & Data Engineering Execution

  • Develop and maintain ETL/ELT pipelines using Azure Databricks and Azure Synapse Analytics.
  • Build and optimize data transformations using PySpark.
  • Support migration of data workflows from Oracle-based environments to Azure Cloud (as required).
  • Perform data validation and reconciliation during cloud transition phases.
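The validation-and-reconciliation duty above can be illustrated with a minimal sketch. This is plain Python over stand-in row lists (in practice this would run with PySpark against the Oracle source and Azure target); the table contents, column names, and `amount` field are hypothetical:

```python
from decimal import Decimal

def reconcile(source_rows, target_rows, amount_key="amount"):
    """Compare row counts and a column total between a source (e.g. Oracle)
    extract and a target (e.g. Azure) load; return the findings."""
    src_count, tgt_count = len(source_rows), len(target_rows)
    src_total = sum(Decimal(str(r[amount_key])) for r in source_rows)
    tgt_total = sum(Decimal(str(r[amount_key])) for r in target_rows)
    return {
        "row_count_match": src_count == tgt_count,
        "row_count_delta": tgt_count - src_count,
        "total_match": src_total == tgt_total,
        "total_delta": tgt_total - src_total,
    }

# Hypothetical sample data standing in for query results on both sides.
source = [{"id": 1, "amount": "10.50"}, {"id": 2, "amount": "4.25"}]
target = [{"id": 1, "amount": "10.50"}, {"id": 2, "amount": "4.25"}]
print(reconcile(source, target))
```

Using `Decimal` rather than floats keeps monetary totals exact, so a zero delta genuinely means the loads match.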

2. SQL & Database Development

  • Write and optimize complex SQL queries across Oracle, Azure SQL, and Synapse.
  • Support backend data preparation for Power BI and analytics use cases.
  • Use tools such as TOAD for data extraction, validation, and troubleshooting.

3. Analytics & BI Enablement

  • Collaborate with BI Developers and Data Scientists to translate data requirements into structured, optimized datasets.
  • Build reusable datasets to support the evolving hub-and-spoke data operating model.
  • Ensure data accuracy, integrity, and performance optimization before publishing datasets.
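The accuracy-and-integrity bullet above might translate into pre-publication checks along these lines. This is a plain-Python sketch (on Databricks the same rules would typically run over a PySpark DataFrame); the `member_id` and `claim_date` column names and the sample rows are hypothetical:

```python
def quality_checks(rows, key="member_id", required=("member_id", "claim_date")):
    """Run basic integrity checks before publishing a dataset:
    no missing required fields and no duplicate business keys."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                issues.append(f"row {i}: missing {col}")
        k = row.get(key)
        if k in seen:
            issues.append(f"row {i}: duplicate {key}={k}")
        seen.add(k)
    return issues

rows = [
    {"member_id": "M1", "claim_date": "2024-01-05"},
    {"member_id": "M2", "claim_date": None},         # missing date
    {"member_id": "M1", "claim_date": "2024-02-10"}, # duplicate key
]
print(quality_checks(rows))
```

A dataset would only be published when the returned issue list is empty; otherwise the findings feed back into the validation step.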

4. Cloud Readiness & Performance Awareness

  • Work within Azure Databricks cluster environments.
  • Support efficient pipeline scheduling and resource utilization.
  • Follow best practices in distributed data processing and cloud-based transformations.

5. Collaboration & Delivery

  • Work closely with Humana data teams, Product Owners, and Sagility stakeholders.
  • Participate in sprint planning and backlog discussions.
  • Maintain documentation of pipelines, transformations, and data logic.
  • Ensure adherence to client data governance and compliance standards.

Required Skills

  • 6+ years of experience in Data Engineering / ETL development.
  • Strong SQL expertise (Oracle mandatory; Azure SQL preferred).
  • Hands-on experience with Azure Databricks and PySpark.
  • Experience building ETL/ELT pipelines.
  • Working knowledge of Azure Synapse Analytics.
  • Proficiency in Python; R exposure is a plus.
  • Strong debugging, data validation, and performance tuning skills.

Preferred (Not Mandatory)

  • Exposure to cloud migration or modernization initiatives.
  • Knowledge of US Healthcare / Payer datasets (e.g., claims, membership, provider, eligibility).
  • Experience supporting BI backend data preparation (Power BI).
  • Familiarity with distributed data processing concepts.
  • Experience working in client-controlled enterprise environments.

Interview Rounds: 3

Shift Timings: 2 PM to 11 PM

Location: Bangalore

Work Mode: Hybrid (2 or 3 days work from office)

Notice Period: 0 to 30 days

Regards,

[Confidential Information]

Job ID: 144183941
