HIH - Evernorth - Software Engineering Senior Advisor

12-17 Years
  • Posted a month ago

Job Description

Responsibilities

  • Design and implement end-to-end data pipelines using Python and Airflow, ensuring efficient scheduling, orchestration, and dependency management for complex workflows.
  • Develop high-performance SQL queries for data extraction, transformation, and reporting, with a strong focus on query optimization and scalability across large datasets.
  • Automate and modularize ETL processes leveraging Python scripting and reusable Airflow DAG components, adhering to best practices in software engineering and data governance.
  • Work with AWS cloud services (S3, Glue, Redshift, Aurora RDS, CloudWatch) to enable scalable, secure, and cloud-native data processing and storage solutions.
  • Integrate and transform data from multiple sources such as Alteryx, PostgreSQL, Oracle, and PL/SQL into unified data models to support analytics and reporting needs.
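As an illustration of the modular ETL work described above, here is a minimal sketch using only the Python standard library. The source data, field names, and pipeline stages are hypothetical, not part of the role; in practice each stage would map to an Airflow task reading from sources such as S3 or PostgreSQL.

```python
import csv
import io

# Hypothetical raw feed standing in for a real source (S3, PostgreSQL, etc.).
RAW_CSV = """member_id,claim_amount,state
1001,250.00,TX
1002,,NY
1003,99.50,TX
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: drop rows with missing amounts and cast types."""
    return [
        {"member_id": int(r["member_id"]),
         "claim_amount": float(r["claim_amount"]),
         "state": r["state"]}
        for r in rows
        if r["claim_amount"]
    ]

def load(rows: list[dict]) -> dict[str, float]:
    """Load: aggregate claim totals per state (stand-in for a warehouse write)."""
    totals: dict[str, float] = {}
    for r in rows:
        totals[r["state"]] = totals.get(r["state"], 0.0) + r["claim_amount"]
    return totals

# Each stage is a small, reusable unit — the same decomposition Airflow
# DAGs encourage, with one task per stage and explicit dependencies.
totals = load(transform(extract(RAW_CSV)))
print(totals)  # → {'TX': 349.5}  (the NY row is dropped for its missing amount)
```

Keeping extract, transform, and load as separate pure functions is what makes them reusable across DAGs and unit-testable in isolation.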

Qualifications

Required Skills:

  • Very strong skills in the following technologies:
    • Databricks
    • PySpark
    • Python
    • SQL
    • Apache Airflow: DAGs, tasks, scheduling, operators, etc.
    • AWS services: S3, EC2, Glue, Redshift, PostgreSQL
  • Strong written and verbal communication skills with the ability to interact with all levels of the organization.
  • Ability to oversee and mentor junior data engineers.
  • Familiarity with agile methodology.
  • Good working knowledge of data virtualization, data warehousing concepts, dimensional models and relational databases, and data management.
  • Distributed source-code management with Git (e.g., GitHub).
  • Code deployment following CI/CD best practices.
  • Create, maintain, and continuously improve optimal data pipeline architecture.
  • Design performant, scalable data storage and retrieval methods for large datasets in data-intensive applications.
  • Develop deep knowledge of large, complex data sets that meet functional and non-functional business requirements.
  • Help the application team tune query performance and establish best practices.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Product, Business Analysts, Data science and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Be attentive to alerts or issues related to the database and take action to resolve anomalies.
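The query-tuning expectation above can be sketched with the standard-library sqlite3 module standing in for Redshift or PostgreSQL. The table, column, and index names are hypothetical; the point is how an index on the filter column changes the query plan from a full scan to an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER PRIMARY KEY, state TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims (state, amount) VALUES (?, ?)",
    [("TX", 250.0), ("NY", 80.0), ("TX", 99.5)],
)

QUERY = "SELECT SUM(amount) FROM claims WHERE state = ?"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + QUERY, ("TX",)).fetchall()

# An index on the filter column lets the planner seek instead of scan.
conn.execute("CREATE INDEX idx_claims_state ON claims (state)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + QUERY, ("TX",)).fetchall()

print(plan_before[-1][-1])  # e.g. "SCAN claims"
print(plan_after[-1][-1])   # e.g. "SEARCH claims USING INDEX idx_claims_state (state=?)"
```

On warehouse engines such as Redshift the same habit applies through `EXPLAIN`, sort/distribution keys, and statistics, but the workflow — read the plan before and after the change — is identical.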

Required Experience & Education:

  • 12+ years of experience.
  • College degree in any STEM-based field.

More Info

Open to candidates from: India

About Company

Flexible and connected pharmacy, care and benefit solutions that move organizations and people forward.

Job ID: 137416215