Hiring: Data Engineer (Remote, India)
Role Type: Contract
Contract Duration: 12+ months
Location: Remote, India
Working Hours: 8-hour day, with coverage until 12 PM EST
Experience with the following would be an advantage:
- Data handling with AI/ML: Python
- ETL: Azure Data Factory, Dagster
- Data Platforms: Snowflake
- Data Modelling: dbt
- Observability & Cataloging: Anomalo, Alation
Role Requirements (Skills, Experience and Qualifications)
- Experience working in the Asset & Wealth Management industry, with exposure to common investment management data domains and use cases.
- 2-3+ years of hands-on experience in data engineering, analytics engineering, or data modelling roles, including delivery of modern data platform and transformation initiatives.
- Prior experience in financial services or investment management environments is strongly preferred.
- Strong Snowflake engineering skills, including developing ELT pipelines, implementing data models and transformations, and working with Snowflake-native features such as Streams, Tasks, Snowpipe, Time Travel, and related platform capabilities (a brief illustrative sketch follows this list).
- Familiarity with Medallion-style architectures (e.g. Bronze / Silver / Gold / Analytics layers) and the ability to align ingestion, transformation logic, and pipeline structure to those layers.
- Hands-on experience using dbt for modular SQL development, transformations, automated testing, documentation, and CI/CD integration (a second illustrative sketch follows this list).
- Experience designing and building end-to-end data pipelines spanning ingestion, MDM integration, automated data quality validation, metadata and lineage enablement, Snowflake transformation layers, and delivery to downstream platforms within a governed enterprise ecosystem.
- Experience building and orchestrating data pipelines using Azure Data Factory, Dagster or equivalent orchestration tools (e.g. Airflow), including scheduling, dependency management, monitoring, and failure handling.
- Experience with data profiling, reconciliation, and quality validation, including integrating data quality tooling such as Anomalo to operationalize DQ checks across ingestion and transformation layers.
- Strong proficiency in SQL, with experience using Python or similar languages for automation, data processing, or pipeline support.
- Hands-on experience with CI/CD for data pipelines, including version control (Git), automated deployments, environment promotion, and continuous integration of dbt and Snowflake objects.
- Experience working with large, complex datasets sourced from multiple internal and external systems.
- Familiarity with common investment management data providers and platforms (e.g. Bloomberg, BlackRock Aladdin, SimCorp, Charles River, Broadridge, FactSet) is a strong plus.
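As a rough illustration of the Snowflake-native pattern referenced in the Snowflake requirement (not part of the role description itself), a Stream-plus-Task pipeline that merges newly landed Bronze rows into a Silver table might look like the minimal sketch below. All object, warehouse, and column names are hypothetical placeholders.

```sql
-- Hypothetical sketch: capture changes on a Bronze landing table with a Stream,
-- then merge them into a Silver table on a schedule using a Task.
-- Names (bronze.raw_trades, silver.trades, transform_wh) are placeholders.

CREATE OR REPLACE STREAM bronze.raw_trades_stream
  ON TABLE bronze.raw_trades;

CREATE OR REPLACE TASK silver.merge_trades
  WAREHOUSE = transform_wh
  SCHEDULE  = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('bronze.raw_trades_stream')
AS
  MERGE INTO silver.trades AS t
  USING bronze.raw_trades_stream AS s
    ON t.trade_id = s.trade_id
  WHEN MATCHED THEN UPDATE SET
    t.quantity   = s.quantity,
    t.price      = s.price,
    t.updated_at = s.loaded_at
  WHEN NOT MATCHED THEN INSERT (trade_id, quantity, price, updated_at)
    VALUES (s.trade_id, s.quantity, s.price, s.loaded_at);

-- Tasks are created suspended and must be resumed explicitly.
ALTER TASK silver.merge_trades RESUME;
```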
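Similarly, the modular, incremental transformation style mentioned in the dbt requirement could be sketched as follows; the model, source, and column names are illustrative only.

```sql
-- models/silver/fct_positions.sql (hypothetical model; names are illustrative)
-- Incremental dbt model: only rows loaded since the last run are reprocessed.
{{ config(materialized='incremental', unique_key='position_id') }}

select
    position_id,
    portfolio_id,
    security_id,
    market_value,
    loaded_at
from {{ source('bronze', 'raw_positions') }}

{% if is_incremental() %}
  -- On incremental runs, restrict to newly landed rows.
  where loaded_at > (select max(loaded_at) from {{ this }})
{% endif %}
```

Automated tests (e.g. unique and not_null checks on the key column) and model documentation would typically be declared in an accompanying schema .yml file and executed in CI via dbt test.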