
Data Engineer


Job Description

Huron is a global consultancy that collaborates with clients to drive strategic growth, ignite innovation and navigate constant change. Through a combination of strategy, expertise and creativity, we help clients accelerate operational, digital and cultural transformation, enabling the change they need to own their future.

Join our team as the expert you are now and create your future.

We're seeking a Data Engineering Manager to join the Data Science & Machine Learning team in our Commercial Digital practice, where you'll lead the design, development, and delivery of data infrastructure that powers intelligent systems across Financial Services, Manufacturing, Energy & Utilities, and other commercial industries.

Requirements

  • 2+ years of hands-on experience building and deploying data pipelines in production, not just ad-hoc queries and exports. You've built ETL/ELT systems that run reliably, scale, and are maintained over time.
  • Experience leading and developing technical teams, including coaching, mentorship, code review, and performance management. Demonstrated ability to build high-performing teams and develop junior talent.
  • Strong SQL and Python programming skills with deep experience in PySpark for distributed data processing: SQL for analytics and data modeling, Python/PySpark for pipeline development and large-scale transformations (see the PySpark sketch after this list).
  • Experience building data pipelines that serve AI/ML systems, including feature engineering workflows, vector embeddings for retrieval-augmented generation (RAG), and data quality frameworks that ensure model reproducibility (an embedding sketch follows this list). Familiarity with emerging agent integration standards such as MCP (Model Context Protocol) and A2A (Agent-to-Agent), and the ability to design data services and APIs that can be discovered and consumed by autonomous AI agents.
  • Experience with modern data transformation tools, particularly dbt. You understand modular SQL development, testing, documentation practices, and how to implement these at scale across teams.
  • Experience with cloud data platforms and lakehouse architectures (Snowflake, Databricks, Microsoft Fabric) and familiarity with open table formats (Delta Lake, Apache Iceberg). We're platform-flexible but Microsoft-preferred.
  • Proficiency with workflow orchestration tools such as Apache Airflow, Dagster, Prefect, or Microsoft Data Factory. You understand DAGs, scheduling, dependency management, and how to design reliable orchestration at scale (a minimal DAG sketch appears after this list).
  • Solid foundation in data modeling concepts: dimensional modeling, data vault, normalization/denormalization, and understanding of when different approaches are appropriate for different use cases.
  • Excellent communication and client management skills: the ability to communicate technical concepts to non-technical stakeholders, lead client meetings, and build trusted relationships with executive audiences.
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field (or equivalent practical experience).
  • Willingness to travel approximately 30% to client sites as needed.
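
To make the PySpark requirement concrete, here is a minimal sketch of an incremental batch transformation. The source path, column names, and Delta Lake target are illustrative assumptions, not details from this posting; writing Delta output also assumes the delta-spark package is available.

```python
# Minimal incremental PySpark load; all names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Read one day's worth of raw order events (path is illustrative).
raw = spark.read.parquet("s3://raw-zone/orders/2024-01-01/")

# Deduplicate on the business key and derive a clean, typed model.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount") > 0)
       .withColumn("ingest_date", F.lit("2024-01-01"))
)

# Append into the curated lakehouse table, partitioned by ingest date.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("ingest_date")
      .save("s3://curated-zone/orders/"))
```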
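
For the RAG portion of the AI/ML requirement, this hedged sketch shows one way to batch-embed documents and retrieve by similarity. The model name, FAISS index, and sample text are illustrative choices; a production system would typically use a managed vector database instead.

```python
# Illustrative embedding + retrieval step for a RAG pipeline.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Feature pipelines must be reproducible across training runs.",
    "CDC streams keep the curated layer in sync with source systems.",
]

# Encode documents into dense, L2-normalized vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(documents, normalize_embeddings=True)

# In-memory inner-product index (cosine similarity on normalized vectors).
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

# Retrieve the closest document for a query.
query = model.encode(["how do we keep training data consistent?"],
                     normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 1)
print(documents[ids[0][0]], float(scores[0][0]))
```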
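
And for the orchestration requirement, a minimal Airflow DAG sketch showing scheduling and dependency management. The dag_id, schedule, and task bodies are hypothetical, and the schedule argument assumes Airflow 2.4+ (earlier versions use schedule_interval).

```python
# Three-task DAG: extract -> transform -> load (bodies are placeholders).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the landing zone")

def transform():
    print("run Spark/dbt transformations")

def load():
    print("publish curated tables")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # The >> operator declares dependencies, forming the DAG.
    t_extract >> t_transform >> t_load
```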

Preferences

  • Experience in Financial Services, Manufacturing, or Energy & Utilities industries.
  • Background in building data infrastructure for ML/AI systems: feature stores (Feast, Databricks Feature Store), training data pipelines, vector databases for RAG/LLM workloads, or model serving architectures.
  • Experience with real-time and streaming data architectures using Kafka, Spark Streaming, Flink, or Azure Event Hubs, including CDC patterns for data synchronization (see the streaming sketch after this list).
  • Familiarity with MCP (Model Context Protocol), A2A (Agent-to-Agent), or similar standards for AI system data integration.
  • Experience with data quality and observability frameworks such as Great Expectations, Soda, Monte Carlo, or dbt tests at enterprise scale.
  • Knowledge of data governance, cataloging, and lineage tools (Unity Catalog, Purview, Alation, or similar).
  • Experience with high-performance Python data tools such as Polars or DuckDB for efficient data processing (a short example follows this list).
  • Cloud certifications (Snowflake SnowPro, Databricks Data Engineer, Azure Data Engineer, or AWS Data Analytics).
  • Consulting experience or demonstrated ability to work across multiple domains and adapt quickly to new problem spaces.
  • Contributions to open-source data engineering projects or active participation in the dbt/data community.
  • Master's degree or PhD in a technical field.
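
To illustrate the streaming preference, here is a hedged Spark Structured Streaming sketch that consumes Debezium-style CDC events from Kafka. The broker address, topic, event schema, and output paths are assumptions for illustration.

```python
# Consume CDC events from Kafka and append creates/updates to a curated table.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructType, TimestampType

spark = SparkSession.builder.appName("orders_cdc").getOrCreate()

# Shape of a simplified, Debezium-style change event payload.
schema = (StructType()
          .add("op", StringType())           # c=create, u=update, d=delete
          .add("order_id", StringType())
          .add("amount", StringType())
          .add("updated_at", TimestampType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders.cdc")
          .load())

parsed = (events
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*")
          .filter(F.col("op").isin("c", "u")))  # deletes handled elsewhere

(parsed.writeStream
       .format("delta")
       .option("checkpointLocation", "s3://checkpoints/orders_cdc/")
       .outputMode("append")
       .start("s3://curated-zone/orders_cdc/"))
```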
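
And for the high-performance Python tooling preference, a short sketch combining DuckDB's Parquet scanning with a Polars transformation. The file layout and column names are made up for illustration.

```python
# Aggregate Parquet files with DuckDB, then post-process with Polars.
import duckdb
import polars as pl

# DuckDB scans Parquet directly and pushes down the aggregation.
daily = duckdb.sql("""
    SELECT order_date, sum(amount) AS revenue
    FROM read_parquet('orders/*.parquet')
    GROUP BY order_date
    ORDER BY order_date
""").pl()  # hand off the result as a Polars DataFrame

# Polars expression API for a follow-on derived column.
with_share = daily.with_columns(
    (pl.col("revenue") / pl.col("revenue").sum()).alias("revenue_share")
)
print(with_share.head())
```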

Position Level

Senior Analyst

Country

India

Job ID: 137380203
