About WhiteCrow
We are global talent research, insight, and sourcing specialists with offices in the UK, USA, Singapore, Malaysia, Hong Kong, Dubai, and India. Our international reach has helped us understand and penetrate specialist markets at a global level. We also extend our services to complement our clients' in-house talent acquisition teams.
About our client
Our client operates in the consumer financial services space, focusing on making everyday purchases and essential needs more accessible through flexible financing solutions. They support individuals across their financial journey—from obtaining their first line of credit to managing long-term financial flexibility—by enabling more informed and responsible credit decisions.
Our client has built a vast network that connects consumers with a wide range of small and mid-sized businesses, as well as providers in the health and wellness sector. Through this ecosystem, they play a meaningful role in supporting both customer financial well-being and the growth of businesses that form a critical part of the broader economy.
As an AVP – Model Ops Engineer, you will be responsible for...
- Designing, developing, and maintaining robust pipelines to collect, transform, and store data used in model monitoring workflows (e.g., scoring data, performance metrics, outcomes).
- Building scalable data architectures to support real-time and batch monitoring, including data ingestion, enrichment, and retention practices.
- Developing reusable monitoring components (e.g., performance drift detectors, threshold-based alerts, metric repositories) that support various model types and regulatory needs.
- Integrating data pipelines with model lifecycle platforms, MLOps tools, and observability solutions to ensure seamless model performance tracking.
- Partnering with model risk and compliance teams to ensure data lineage, audit trails, and documentation are preserved and accessible for regulatory reviews (e.g., SR 11-7 compliance).
- Collaborating with data scientists, model validators, and product managers to align monitoring data infrastructure with evolving model monitoring requirements.
- Working closely with the model monitoring and strategy monitoring analytics teams within MO&A to ensure the monitoring data infrastructure adapts to changing analytics and monitoring needs.
- Enabling visualization and reporting capabilities through dashboards (e.g., Power BI, Tableau) that summarize model health, stability, and issue alerts.
- Designing and maintaining high-performance data pipelines that ingest, transform, and version datasets for model and strategy monitoring.
- Optimizing data storage and compute performance for large-scale monitoring use cases involving high-frequency scoring or model ensembles.
What you already have...
- Bachelor's degree in a quantitative, technical, or data-focused field (e.g., Statistics, Mathematics, Computer Science, Data Science, Engineering) with 5+ years of relevant experience, or, in lieu of a degree, 7+ years of relevant work experience in data engineering or related roles in the financial services or regulated analytics domain.
- Strong proficiency with data engineering tools and frameworks (e.g., Apache Spark, Airflow, Kafka, dbt, PySpark).
- Proficient in programming languages such as SAS, Python, and SQL for building monitoring pipelines and validation checks.
- Experience with cloud-based data infrastructure (e.g., AWS, Azure, GCP) and data warehousing (e.g., Snowflake, Redshift, BigQuery).
- Familiarity with MLOps practices, model metadata tracking (e.g., MLflow), and monitoring toolkits (e.g., Evidently AI, WhyLabs, Prometheus).
- Understanding of model risk governance requirements and the role of data engineering in ensuring compliant model monitoring.
- Ability to work in an agile environment and deliver high-quality, production-grade code in collaboration with DevOps and platform engineering teams.