
Associate Lead - Testing (QA + MLOps)

  • Posted 14 hours ago

Job Description

While technology is the heart of our business, a global and diverse culture is the heart of our success. We love our people and take pride in fostering a culture built on transparency, diversity, integrity, learning and growth.

If working in an environment that encourages you to innovate and excel, not just professionally but personally, interests you, you would enjoy your career with Quantiphi!

Role: Lead/Associate Lead - QA + MLOps & Generative AI

Experience: 10+ years

Location: Mumbai/Bangalore (Hybrid)

Key Responsibilities:

AI/ML & GenAI Testing Strategy (AWS Ecosystem)

Define testing approaches for AI systems built on AWS services such as:

  • Amazon SageMaker

  • Amazon Bedrock

  • AWS Lambda

  • Amazon API Gateway

  • Amazon Kinesis

  • AWS Glue

  • Amazon S3

  • Amazon CloudWatch

Design validation frameworks covering:

  • Model accuracy & performance validation

  • Data drift & concept drift detection

  • Hallucination detection for LLMs

  • Prompt robustness testing

  • RAG validation (retrieval accuracy + grounding)

  • Bias & fairness validation

  • Safety & toxicity testing
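Drift detection, one item in the framework above, is commonly automated with AWS tooling such as SageMaker Model Monitor; as a minimal pure-Python sketch of the underlying idea, a Population Stability Index (PSI) over binned feature values flags distribution shift between a baseline and a live sample (bin count and the conventional 0.1/0.25 thresholds are illustrative, not prescribed by the role):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Rule of thumb: < 0.1 no significant drift, 0.1-0.25 moderate, > 0.25 major."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against degenerate, constant samples

    def frac(sample, i):
        left = lo + i * width
        right = left + width if i < bins - 1 else hi + 1e-9  # include the max value
        count = sum(left <= x < right for x in sample)
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

A validation suite would compute this per feature (and per prediction distribution) on a schedule and raise an alert when the index crosses the agreed threshold.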

MLOps Quality Engineering (AWS-Centric)

Validate the end-to-end ML lifecycle including:

  • Data ingestion & feature pipelines

  • Model training & hyperparameter tuning

  • Model versioning & registry

  • Deployment validation

  • Canary & blue/green release validation

Work with AWS-native services such as:

  • SageMaker Pipelines

  • SageMaker Model Monitor

  • SageMaker Feature Store

  • Bedrock model evaluation workflows

  • CloudWatch-based observability

Implement CI/CD quality gates for ML pipelines integrated with AWS DevOps tools.
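A quality gate of the kind described above typically compares a candidate model's evaluation metrics against agreed minimums and fails the pipeline stage (e.g. a CodeBuild step) when any check misses. A minimal sketch, with hypothetical metric names and thresholds:

```python
def evaluate_quality_gate(metrics, thresholds):
    """Compare candidate-model metrics to required minimums.

    Returns (passed, failures) so a CI step can log each failing
    metric and exit non-zero when the gate does not pass.
    """
    failures = [
        f"{name}: {metrics.get(name, 0.0):.3f} < required {minimum:.3f}"
        for name, minimum in thresholds.items()
        if metrics.get(name, 0.0) < minimum
    ]
    return (not failures, failures)
```

In a pipeline, the thresholds would live in versioned config alongside the model, so relaxing a gate is itself a reviewable change.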

GenAI & Agentic AI Testing

Define quality engineering approaches for:

  • LLM-based applications using Amazon Bedrock

  • Prompt engineering validation

  • Multi-agent orchestration testing

  • Chatbot & Voice bot conversational testing

  • Intent classification validation

  • Conversation drift & fallback validation

  • API contract validation for LLM integrations
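One building block for the RAG and hallucination checks listed above is a groundedness score: how much of a generated answer is actually supported by the retrieved context. Production checks usually use NLI models or LLM judges (e.g. via Bedrock evaluation workflows); the crude lexical proxy below is only an illustrative sketch:

```python
def grounding_score(answer, retrieved_passages):
    """Fraction of answer tokens appearing in the retrieved context.

    A rough lexical proxy for groundedness: low scores suggest the
    answer may contain content not supported by retrieval.
    """
    answer_tokens = set(answer.lower().split())
    context_tokens = set(" ".join(retrieved_passages).lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)
```

A test harness would assert a minimum score per answer and surface low-scoring responses for human review rather than treating the proxy as ground truth.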

Build reusable evaluation harnesses for:

  • BLEU / ROUGE scoring

  • Embedding similarity scoring

  • Response consistency

  • Safety scoring frameworks
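The embedding-similarity and response-consistency harnesses above can be illustrated by embedding repeated responses to the same prompt and averaging pairwise cosine similarity; the embeddings themselves would come from a model (for instance via Bedrock), so the generic functions below are a sketch of the scoring step only:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def consistency_score(embeddings):
    """Mean pairwise cosine similarity across repeated responses to one prompt."""
    pairs = [(i, j) for i in range(len(embeddings)) for j in range(i + 1, len(embeddings))]
    if not pairs:
        return 1.0
    return sum(cosine_similarity(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)
```

A low consistency score on a fixed prompt set is a useful regression signal when prompts, models, or retrieval settings change.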

Framework & Capability Development

  • Design reusable AI testing accelerators

  • Create AWS-aligned AI test automation frameworks (Python-first)

  • Develop synthetic data generation strategies

  • Establish AI quality scorecards

  • Build an internal AI QA Center of Excellence

Client Engagement & Leadership

  • Lead AI/ML quality strategy workshops

  • Perform AI risk & readiness assessments

  • Present quality architecture to CXOs

  • Drive QA transformation programs

  • Mentor QA teams on AWS-based AI testing

  • Own delivery for AI testing engagements end-to-end

Must have skills:

Testing Expertise

  • 8-12+ years in Quality Engineering

  • Strong test strategy, automation & governance experience

  • Experience leading QA transformation initiatives

  • Experience building frameworks from scratch

AI/ML & GenAI Expertise

  • Deep understanding of ML lifecycle

  • Experience testing ML models (NLP preferred)

  • Hands-on experience validating LLM applications

  • Strong understanding of:

  • Prompt engineering

  • RAG architecture

  • Embeddings

  • Bias & explainability

AWS AI/ML Expertise

  • Hands-on experience with:

  • Amazon SageMaker (training, deployment, monitoring)

  • Amazon Bedrock (LLM integration & evaluation)

  • S3-based data pipelines

  • AWS IAM (security validation)

  • CloudWatch monitoring

  • Lambda & API Gateway integrations

  • AWS CI/CD (CodePipeline / CodeBuild preferred)

Understanding of:

  • Infrastructure as Code (Terraform / CloudFormation)

  • Observability in AI systems

  • Cost monitoring for ML workloads

Technical Skills

  • Python (mandatory)

  • Experience with ML libraries (Scikit-learn, TensorFlow, PyTorch)

  • Experience with LLM frameworks (LangChain, etc.)

  • API & automation testing frameworks

  • Git-based workflows

Leadership & Communication

  • Strong client-facing communication

  • Experience leading QA teams

  • Ability to create strategy decks & solution proposals

  • Strong stakeholder management

About Company

Founded in 2013, Quantiphi is an award-winning AI-first digital engineering company driven by the desire to reimagine and realize transformational opportunities at the heart of business. We are passionate about our customers and obsessed with problem-solving to make products smarter, customer experiences frictionless, processes autonomous and businesses safer.

Job ID: 144885929