ETL Developer

Posted 23 days ago

Job Description

Huron is a global consultancy that collaborates with clients to drive strategic growth, ignite innovation and navigate constant change. Through a combination of strategy, expertise and creativity, we help clients accelerate operational, digital and cultural transformation, enabling the change they need to own their future.

Join our team as the expert you are now and create your future.

We are looking for a highly skilled Data Integration Engineer to design, build, and manage scalable data pipelines and integration solutions across cloud and on-premises platforms. The role requires strong expertise in ETL/iPaaS tools, APIs, and data platforms, with exposure to AI/ML-driven automation for smarter monitoring, anomaly detection, and data quality improvement.

Requirements

  • 4-7 years of experience in data integration and ETL/ELT design, strong skills in Informatica or similar tools, proficiency in SQL and Python, experience with cloud data platforms, and familiarity with AI-driven data quality solutions.

Responsibilities

  • Design and optimize data integration workflows, ETL/ELT pipelines, and APIs using Informatica IICS and other iPaaS tools.
  • Develop scalable pipelines across cloud platforms (AWS, Azure, GCP) and modern data warehouses (Snowflake, Databricks, BigQuery, Redshift).
  • Implement data governance frameworks including data quality, lineage, and cataloging to ensure trusted and compliant data flows.
  • Leverage AI/ML techniques for anomaly detection, predictive quality checks, and self-healing pipeline automation.
  • Collaborate with cross-functional teams (architects, analysts, business stakeholders) to integrate structured, semi-structured, and unstructured data sources.
  • Ensure robust deployment practices by integrating DevOps and CI/CD principles into data integration workflows.
  • Document and standardize integration patterns, best practices, and reusable frameworks for enterprise-wide adoption.

Preferences

  • Hands-on experience with Kafka, Spark, Airflow, or event-driven architectures.
  • Knowledge of REST APIs, microservices, and real-time data integration.
  • Conceptual understanding or hands-on exposure to ML frameworks (Scikit-learn, TensorFlow, PyTorch).
  • Experience contributing to AI-augmented/self-healing pipelines.
  • Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.

Position Level

Associate

Country

India

Job ID: 133291385
