Senior Data Engineer

2-5 Years

This job is no longer accepting applications

  • Posted 2 months ago
  • Over 100 applicants

Job Description

We are hiring a Senior Data Engineer to join our team. At Kroll, we are building a strong, forward-looking data practice that integrates artificial intelligence, machine learning, and advanced analytics. You will design, build, and integrate data pipelines from diverse sources and collaborate with teams that serve the world's largest financial institutions, law enforcement bodies, and government agencies. This role will partner with the Alternative Asset Advisory service line through a dotted-line relationship, supporting their strategic data initiatives and client delivery goals.

The day-to-day responsibilities include, but are not limited to:

  • Design and build robust, scalable organizational data infrastructure and architecture.
  • Identify and implement process improvements (e.g., infrastructure redesign, automation of data workflows, performance optimizations).
  • Select appropriate tools, services, and technologies to build resilient pipelines for data ingestion, transformation, and distribution.
  • Develop and manage ELT/ETL pipelines and related applications.
  • Collaborate with global teams to deliver fault-tolerant, high-quality data engineering solutions.
  • Perform monthly code quality audits and peer reviews to ensure consistency, readability, and maintainability across the engineering codebase.

Requirements:

  • Proven experience building and managing ETL/ELT pipelines.
  • Advanced proficiency with Azure, AWS, and Databricks (with a focus on data services).
  • Deep knowledge of Python, the Spark ecosystem (PySpark, Spark SQL), and relational databases.
  • Experience building REST APIs, Python SDKs, libraries, and Spark-based data services.
  • Hands-on expertise with modern frameworks and tools such as FastAPI, Pydantic, Polars, Pandas, Delta Lake, Docker, and Kubernetes.
  • Understanding of Lakehouse architecture, Medallion architecture, and data governance.
  • Experience with pipeline orchestration tools (e.g., Airflow, Azure Data Factory).
  • Strong communication skills and the ability to work cross-functionally with international teams.
  • Skilled in data profiling, cataloging, and mapping for technical data flows
  • Understanding of API product management principles, including lifecycle strategy, documentation standards, and versioning

Desired Skills:

  • Deep understanding of cloud architecture (compute, storage, networking, security, cost optimization)
  • Experience tuning complex SQL/Spark queries and pipelines for performance.
  • Hands-on experience building Lakehouse solutions using Azure Databricks, ADLS, PySpark, etc.
  • Familiarity with OOP, asynchronous programming, and batch processing paradigms.
  • Experience with CI/CD, Git, and DevOps best practices.

More Info

Open to candidates from: India

About Company

As the leading independent provider of risk and financial advisory solutions, Kroll leverages our unique insights, data and technology to help clients stay ahead of complex demands. Kroll's team of more than 6,500 professionals worldwide continues the firm’s nearly 100-year history of trusted expertise spanning risk, governance, transactions and valuation. Our advanced solutions and intelligence provide clients the foresight they need to create an enduring competitive advantage. At Kroll, our values define who we are and how we partner with clients and communities.

Job ID: 126018641
