EXL

Senior Data Engineer


Job Description

Key Responsibilities

Deployment & Infrastructure Engineering

  • Deploy EXLdata.ai in client-owned AWS/Azure/GCP environments.
  • Configure networking, security, CI/CD, Kubernetes, API gateways, and identity integration.
  • Troubleshoot environment, infrastructure, IAM, and pipeline-related issues.
  • Lead cloud-level optimizations (scaling, cost, performance tuning).

Data Engineering & Pipeline Enablement

  • Build, customize, and optimize data pipelines using PySpark, SQL, Databricks, Snowflake, or native hyperscaler data services.
  • Integrate platform agents into client workflows (Data Migration, DQ, DataOps, Annotation).
  • Assist client SMEs in onboarding data sources, targets, and transformations.

Value Realization & Client Enablement

  • Serve as the technical anchor for first-of-a-kind deployments at each client.
  • Ensure clients see measurable value from agent-driven automation (SLA reduction, pipeline acceleration, DQ uplift, migration speed).
  • Provide hands-on support across discovery, configuration, runbooks, and UAT.

GenAI Agent Integration

  • Work with product engineering to integrate new GenAI agents into client pipelines.
  • Tailor agent behaviors, triggers, and workflows for domain-specific use cases.
  • Share field insights that shape our agent roadmap.

Product Innovation & Feedback Loop

  • Act as the voice of the customer for the EXLdata.ai product team.
  • Identify enhancements, feature gaps, and new accelerator ideas.
  • Participate in internal sprints, tooling improvements, and platform hardening.

Managed Service / White-Glove Model

  • Support deployments in EXL-hosted private cloud environments.
  • Serve as the first line of operational excellence for premium clients.
  • Lead operational reliability, monitoring, and support SLAs.

Required Skills & Experience

Technical Expertise

  • 6-12+ years as a Senior Data Engineer, Forward Deployment Engineer, or Platform Engineer.
  • Strong hands-on experience with at least one hyperscaler (AWS, Azure, or GCP).
  • Deep expertise in:
    - PySpark, SQL, Python
    - Databricks / Snowflake (one mandatory, both preferred)
    - Cloud data services (Kinesis, Glue, Redshift, Synapse, BigQuery, Dataproc, etc.)
    - Kubernetes, Docker, CI/CD
    - IAM, VPC, private networking, secrets, API management

Delivery & Client-Facing Skills

  • Demonstrated ability to work directly with client engineering teams.
  • Comfortable running design discussions, debugging sessions, and deployment workshops.
  • Strong communication skills; able to simplify technical topics for business audiences.
  • Ability to operate independently with a consulting mindset and ownership mentality.

GenAI & Multi-Agent Curiosity

  • Exposure to LLMs, agent tooling (LangChain, LangGraph, CrewAI, etc.), or willingness to learn fast.
  • Strong interest in how AI can automate data engineering and governance.

Mindset & Attributes

  • Can-do attitude; thrives in ambiguity.
  • Fast learner; bias for action.
  • Team player who collaborates across product, engineering, and client teams.
  • Customer-first orientation and passion for delivering measurable outcomes.

Job ID: 138367681