Quantiphi

Associate Lead - Testing (QA + MLOps)

  • Posted 16 hours ago

Job Description

While technology is the heart of our business, a global and diverse culture is the heart of our success. We love our people, and we take pride in fostering a culture built on transparency, diversity, integrity, learning and growth.

If working in an environment that encourages you to innovate and excel, not just professionally but personally, interests you, you would enjoy your career with Quantiphi!

Role: Lead/Associate Lead – QA + MLOps & Generative AI

Experience: 10+ years

Location: Mumbai/Bangalore (Hybrid)

Key Responsibilities

AI/ML & GenAI Testing Strategy (AWS Ecosystem)

Define testing approaches for AI systems built on AWS services such as:

  • Amazon SageMaker
  • Amazon Bedrock
  • AWS Lambda
  • Amazon API Gateway
  • Amazon Kinesis
  • AWS Glue
  • Amazon S3
  • Amazon CloudWatch

Design Validation Frameworks Covering

  • Model accuracy & performance validation
  • Data drift & concept drift detection
  • Hallucination detection for LLMs
  • Prompt robustness testing
  • RAG validation (retrieval accuracy + grounding)
  • Bias & fairness validation
  • Safety & toxicity testing
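
Data-drift detection, as listed above, can be validated statistically. A minimal sketch using a two-sample Kolmogorov-Smirnov test is shown below; the feature distributions, sample sizes, and significance threshold are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative data-drift check via a two-sample Kolmogorov-Smirnov test.
# All distributions and the alpha threshold here are hypothetical examples.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live feature distribution appears to have
    drifted from the reference (training-time) distribution."""
    statistic, p_value = ks_2samp(reference, live)
    return bool(p_value < alpha)

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
shifted = rng.normal(loc=0.8, scale=1.0, size=5_000)   # drifted live feature

print(detect_drift(baseline, baseline))  # identical data: no drift
print(detect_drift(baseline, shifted))   # shifted mean: drift flagged
```

In practice a monitoring service such as SageMaker Model Monitor would schedule checks like this per feature, but the statistical core is the same.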

MLOps Quality Engineering (AWS-Centric)

Validate The End-to-end ML Lifecycle Including

  • Data ingestion & feature pipelines
  • Model training & hyperparameter tuning
  • Model versioning & registry
  • Deployment validation
  • Canary & blue/green release validation

Work With AWS-native Services Such As

  • SageMaker Pipelines
  • SageMaker Model Monitor
  • SageMaker Feature Store
  • Bedrock model evaluation workflows
  • CloudWatch-based observability

Implement CI/CD quality gates for ML pipelines integrated with AWS DevOps tools.
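
A CI/CD quality gate of this kind can be a small check that blocks promotion when a candidate model misses its thresholds. The sketch below is a hypothetical example; the metric names and threshold values are assumptions, not an actual gate configuration.

```python
# Hypothetical quality-gate check for an ML CI/CD pipeline.
# Metric names and thresholds below are illustrative assumptions.
THRESHOLDS = {
    "accuracy": 0.90,       # minimum acceptable offline accuracy
    "f1": 0.85,             # minimum F1 on the holdout set
    "max_latency_ms": 250,  # p95 inference latency budget
}

def evaluate_gate(metrics: dict) -> list[str]:
    """Return a list of gate violations; an empty list means the
    candidate model may be promoted to the next pipeline stage."""
    failures = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        failures.append(f"accuracy {metrics['accuracy']:.3f} < {THRESHOLDS['accuracy']}")
    if metrics["f1"] < THRESHOLDS["f1"]:
        failures.append(f"f1 {metrics['f1']:.3f} < {THRESHOLDS['f1']}")
    if metrics["latency_ms"] > THRESHOLDS["max_latency_ms"]:
        failures.append(f"latency {metrics['latency_ms']}ms > {THRESHOLDS['max_latency_ms']}ms")
    return failures

candidate = {"accuracy": 0.93, "f1": 0.82, "latency_ms": 180}
violations = evaluate_gate(candidate)
print("BLOCK" if violations else "PROMOTE", violations)
```

Wired into CodeBuild or CodePipeline, a non-empty violations list would fail the build step and stop the deployment stage.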

GenAI & Agentic AI Testing

Define Quality Engineering Approaches For

  • LLM-based applications using Amazon Bedrock
  • Prompt engineering validation
  • Multi-agent orchestration testing
  • Chatbot & Voice bot conversational testing
  • Intent classification validation
  • Conversation drift & fallback validation
  • API contract validation for LLM integrations
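
Intent-classification validation, one of the items above, typically means replaying a labelled utterance set through the classifier and reporting per-intent accuracy. The sketch below uses a toy keyword classifier purely as a stand-in; the intents, utterances, and `classify` stub are invented for illustration.

```python
# Illustrative intent-classification validation run. The classifier stub,
# intent names, and utterances are hypothetical, not a real NLU model.
from collections import defaultdict

def classify(utterance: str) -> str:
    """Toy stand-in for a deployed intent classifier."""
    text = utterance.lower()
    if "balance" in text:
        return "check_balance"
    if "transfer" in text:
        return "transfer_funds"
    return "fallback"

test_set = [
    ("What's my account balance?", "check_balance"),
    ("Show balance please", "check_balance"),
    ("Transfer 500 to savings", "transfer_funds"),
    ("Sing me a song", "fallback"),
]

per_intent = defaultdict(lambda: [0, 0])  # intent -> [correct, total]
for utterance, expected in test_set:
    correct = classify(utterance) == expected
    per_intent[expected][0] += int(correct)
    per_intent[expected][1] += 1

for intent, (correct, total) in per_intent.items():
    print(f"{intent}: {correct}/{total}")
```

The same harness shape extends to fallback-rate and conversation-drift checks by tracking additional counters per dialogue turn.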

Build Reusable Evaluation Harnesses For

  • BLEU / ROUGE scoring
  • Embedding similarity scoring
  • Response consistency
  • Safety scoring frameworks
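
Embedding similarity scoring, listed above, usually reduces to cosine similarity between a reference answer and a candidate response. The sketch below uses toy 3-dimensional vectors in place of real model embeddings (e.g. from Bedrock or SentenceTransformers); the vectors are illustrative only.

```python
# Minimal embedding-similarity scorer. The toy 3-d vectors stand in
# for real embedding-model output and are purely illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; values near 1 suggest the candidate
    response is semantically close to the reference answer."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

reference = np.array([0.2, 0.8, 0.1])
candidate = np.array([0.25, 0.75, 0.12])
unrelated = np.array([0.9, -0.1, 0.4])

print(cosine_similarity(reference, candidate))  # close to 1.0
print(cosine_similarity(reference, unrelated))  # noticeably lower
```

A harness would apply a tuned similarity threshold per use case rather than a single universal cutoff.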

Framework & Capability Development

  • Design reusable AI testing accelerators
  • Create AWS-aligned AI test automation frameworks (Python-first)
  • Develop synthetic data generation strategies
  • Establish AI quality scorecards
  • Build an internal AI QA Center of Excellence

Client Engagement & Leadership

  • Lead AI/ML quality strategy workshops
  • Perform AI risk & readiness assessments
  • Present quality architecture to CXOs
  • Drive QA transformation programs
  • Mentor QA teams on AWS-based AI testing
  • Own delivery for AI testing engagements end-to-end

Must Have Skills

Testing Expertise

  • 8–12+ years in Quality Engineering
  • Strong test strategy, automation & governance experience
  • Experience leading QA transformation initiatives
  • Experience building frameworks from scratch

AI/ML & GenAI Expertise
  • Deep understanding of ML lifecycle
  • Experience testing ML models (NLP preferred)
  • Hands-on experience validating LLM applications
  • Strong understanding of:
      • Prompt engineering
      • RAG architecture
      • Embeddings
      • Bias & explainability

AWS AI/ML Expertise

  • Hands-on experience with:
      • Amazon SageMaker (training, deployment, monitoring)
      • Amazon Bedrock (LLM integration & evaluation)
      • S3-based data pipelines
      • AWS IAM (security validation)
      • CloudWatch monitoring
      • Lambda & API Gateway integrations
      • AWS CI/CD (CodePipeline / CodeBuild preferred)

Understanding Of

  • Infrastructure as Code (Terraform / CloudFormation)
  • Observability in AI systems
  • Cost monitoring for ML workloads

Technical Skills

  • Python (mandatory)
  • Experience with ML libraries (Scikit-learn, TensorFlow, PyTorch)
  • Experience with LLM frameworks (LangChain, etc.)
  • API & automation testing frameworks
  • Git-based workflows

Leadership & Communication

  • Strong client-facing communication
  • Experience leading QA teams
  • Ability to create strategy decks & solution proposals
  • Strong stakeholder management

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!


Job ID: 147209291