Our client is a leading consumer finance and payments firm operating in India with deep expertise in retail lending, credit cards, and strategic partnerships across digital and offline channels. They are a regulated, innovation-focused organisation prioritising risk management, scalable tech platforms, and customer-centric lending solutions.
Roles & Responsibilities
- Build and scale high-performing data engineering and ML engineering teams (15+), including hiring, coaching and org design.
- Define and implement MLOps practices to convert experiments into production models with robust deployment, monitoring, and retraining workflows.
- Design and launch a centralized decisioning platform that blends low-code model building, AutoML, rules engines and workflow automation across credit, pricing, collections, fraud, cross-sell and customer lifecycle use cases.
- Lead design and delivery of modern data architectures: real-time ingestion, lakehouse, event-driven and streaming systems.
- Ensure comprehensive data lifecycle management: sourcing, lineage, governance, retention and archival to meet regulatory requirements.
- Run DataOps, L1/L2 support and SRE functions to sustain >99.5% availability, with automated testing, observability and self-healing patterns.
- Drive cloud and infrastructure efficiency, vendor relationships (e.g., Databricks), and platform cost optimization.
- Partner closely with product, risk and business leaders to translate platform capabilities into measurable KPIs (loan growth, loss reduction, ticket size, collections performance).
Key Requirements
- 15-20 years of progressive experience in data/ML/platform engineering, with 8-10 years in senior leadership roles.
- Demonstrated experience building large-scale data and ML platforms; fintech or regulated-industry experience strongly preferred.
- Strong academic credentials (BTech/MTech/PhD from premier Indian institutes preferred).
- Deep practical knowledge of streaming, lakehouse architectures, event-driven systems and real-time pipelines.
- Hands-on familiarity with MLOps and model lifecycle tools (MLflow, Kubeflow, Airflow, SageMaker, Vertex AI).
- Experience creating decisioning platforms that combine rules, ML, AutoML and workflow automation.
- Skilled in distributed compute and big-data ecosystems (Spark, Kafka, Flink, Hadoop) and major cloud providers (AWS/GCP/Azure).
- Proficient with container orchestration (Kubernetes/Docker) and programming (Python, SQL, Scala/Java); comfortable with ML/DL frameworks.
- Strong understanding of data governance, lineage and regulatory compliance (RBI/GDPR or similar).
- Proven track record in platform reliability, DataOps/SRE practices and controlling cloud spend.
- Excellent leadership, stakeholder management and vendor negotiation capabilities.
On Offer
- A high-visibility role with ownership across architecture, operations and business impact.
- Competitive compensation for the right individual.