
TheHireHub.Ai

Lead Quality Assurance Engineer

Posted a day ago

Job Description

Lead QA

Experience Level: 9-12+ years (Quality Engineering, AI model validation, data correctness, and enterprise release governance)

About the Organization

We are a technology-driven consulting organization delivering enterprise-grade digital solutions to global and regional clients. The firm works at the intersection of business processes and modern technology, helping organizations optimize operations through cloud platforms, ERP systems, and enterprise applications. With a strong focus on innovation, collaboration, and quality delivery, the organization provides a dynamic environment for professionals who are passionate about technology and continuous learning.

Role Overview

The Lead QA / Quality Engineering Lead is responsible for defining and enforcing end-to-end quality standards across AI, Data, and Generative AI solutions. The role operates at the intersection of quality engineering, AI model validation, data correctness, and enterprise release governance, ensuring AI systems are accurate, reliable, safe, and production-ready before exposure to business users.

This role exists to prevent production failures by embedding AI-aware quality engineering practices, automation, and rigorous validation throughout the delivery lifecycle.

Key Responsibilities

Quality Strategy & QA Governance

  • Define and own the quality engineering strategy for AI, Data, and GenAI platforms
  • Establish QA standards, test frameworks, and quality gates across data pipelines, AI/ML models, GenAI prompts and workflows, and application services
  • Embed quality checkpoints into CI/CD pipelines and release processes
  • Partner with GenAI & Data Solution Architects to ensure quality is addressed by design

AI & GenAI Testing

  • Define and execute AI-specific testing strategies including hallucination detection, prompt drift and regression testing, model consistency, and adversarial scenarios
  • Design and validate test cases for RAG pipelines, agentic workflows, tool calling, and context/memory handling
  • Validate correctness, explainability, and transparency of AI outputs
  • Perform end-to-end testing across UI (React), backend services (Python), vector databases, and LLM-powered workflows
  • Build and maintain Python-based automation frameworks using PyTest, Selenium/Playwright, and Requests
  • Implement automated API testing with schema validation, database checks, and error-handling scenarios (see the pytest sketch after this list)
  • Integrate automated tests into CI/CD pipelines and generate execution and quality reports
  • Troubleshoot failures using application logs, backend logs, and model inference logs, and drive root-cause resolution
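
As one illustration of the pytest-based automation named above, here is a minimal sketch; the endpoint URL, payload shape, and golden phrases are hypothetical stand-ins for whatever the team's LLM service actually exposes:

```python
"""Illustrative only: the endpoint, response schema, and golden answers
below are hypothetical placeholders, not part of this posting."""
import pytest
import requests
from jsonschema import validate

BASE_URL = "https://qa-env.example.internal"  # hypothetical QA environment

ANSWER_SCHEMA = {
    "type": "object",
    "required": ["answer", "sources"],
    "properties": {
        "answer": {"type": "string", "minLength": 1},
        "sources": {"type": "array", "items": {"type": "string"}},
    },
}


def ask(question: str) -> requests.Response:
    return requests.post(
        f"{BASE_URL}/v1/answers", json={"question": question}, timeout=30
    )


def test_answer_contract():
    # Schema validation: the response must honor the agreed API contract.
    resp = ask("What is the refund policy?")
    assert resp.status_code == 200
    validate(instance=resp.json(), schema=ANSWER_SCHEMA)


@pytest.mark.parametrize(
    "question, required_phrase",
    [
        ("What is the refund policy?", "30 days"),
        ("Which regions are supported?", "EU"),
    ],
)
def test_prompt_regression(question, required_phrase):
    # Golden-answer regression: known questions must keep stating known facts,
    # which catches prompt drift and some classes of hallucination.
    answer = ask(question).json()["answer"]
    assert required_phrase in answer, f"possible drift for: {question!r}"
```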

Data Quality & Validation

  • Define and enforce data quality standards for accuracy, completeness, consistency, and timeliness
  • Validate data pipelines supporting analytics, ML, and GenAI systems
  • Perform data reconciliation, profiling, and anomaly detection using SQL and related tools (a reconciliation sketch follows this list)
  • Partner with data engineering teams to identify and resolve data issues early
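
A self-contained sketch of the reconciliation idea, using Python's built-in sqlite3 so it runs as-is; the table names, metrics, and seeded data are hypothetical:

```python
"""Toy source-vs-target reconciliation check; table and metric names
are hypothetical examples, not taken from this posting."""
import sqlite3

# Each metric is computed on both tables and compared.
RECONCILIATION_QUERIES = {
    "row_count": "SELECT COUNT(*) FROM {table}",
    "revenue_total": "SELECT COALESCE(SUM(amount), 0) FROM {table}",
}


def reconcile(conn: sqlite3.Connection, source: str, target: str) -> list[str]:
    """Return human-readable mismatches between two tables."""
    mismatches = []
    for name, template in RECONCILIATION_QUERIES.items():
        src_val = conn.execute(template.format(table=source)).fetchone()[0]
        tgt_val = conn.execute(template.format(table=target)).fetchone()[0]
        if src_val != tgt_val:
            mismatches.append(f"{name}: source={src_val} target={tgt_val}")
    return mismatches


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders_raw (id INTEGER, amount REAL);
        CREATE TABLE orders_curated (id INTEGER, amount REAL);
        INSERT INTO orders_raw VALUES (1, 10.0), (2, 25.5);
        INSERT INTO orders_curated VALUES (1, 10.0);  -- row lost in pipeline
    """)
    for issue in reconcile(conn, "orders_raw", "orders_curated"):
        print("MISMATCH:", issue)
```

Keeping the metric queries in a dict makes it cheap to add new reconciliation checks without touching the comparison loop.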

Test Automation & Tooling

  • Design and implement automated testing frameworks covering unit, API, integration, and UI testing
  • Ensure automation supports frequent releases without compromising quality
  • Maintain test coverage for AI features including embeddings, vector stores, RAG pipelines, and prompt orchestration (a retrieval regression sketch follows)
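
One possible shape for retrieval regression coverage over a RAG feature, sketched below; the corpus, document IDs, and keyword-overlap retriever are toy stand-ins for the real vector store and embeddings:

```python
"""Toy retrieval regression test; in practice `search` would call the
team's actual vector store, and the golden cases would come from
curated production queries."""
import pytest

# Hypothetical in-memory corpus standing in for the real vector store.
CORPUS = {
    "kb-auth-012": "reset your password from the sign-in page",
    "kb-hr-044": "annual leave carry-over rules and limits",
    "kb-it-007": "request a new laptop through the service desk",
}


def search(query: str, k: int = 2) -> list[str]:
    """Keyword-overlap retriever used here purely for illustration."""
    q_terms = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc_id: len(q_terms & set(CORPUS[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]


# Golden cases: for each query, this document must stay in the top-k results.
GOLDEN_CASES = [
    ("how do I reset my password", "kb-auth-012"),
    ("annual leave carry-over rules", "kb-hr-044"),
]


@pytest.mark.parametrize("query, expected_doc_id", GOLDEN_CASES)
def test_retrieval_recall_at_k(query, expected_doc_id):
    assert expected_doc_id in search(query, k=2), f"retrieval regression: {query!r}"
```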

Functional, Integration & UAT

  • Lead functional, integration, system, and regression testing activities
  • Coordinate User Acceptance Testing (UAT) with business stakeholders
  • Ensure UAT scenarios reflect real-world AI and business usage
  • Track defects, prioritize fixes, and validate resolutions prior to release sign-off

Observability, Reliability & Defect Management

  • Implement quality-focused observability to detect AI output regressions, performance degradation, and error trends (a minimal error-trend sketch follows this list)
  • Analyze production defects and quality incidents
  • Lead root-cause analysis (RCA) and corrective actions
  • Partner with DevOps and Platform teams to support shift-left testing and monitoring
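
A minimal sketch of an error-trend check over parsed inference logs; the record shape, version labels, and 5% baseline are assumptions for illustration, not requirements from this posting:

```python
"""Toy error-trend scan: in practice the records would be parsed from
application, backend, or model inference logs."""
from collections import Counter

# Hypothetical parsed inference-log records: (model_or_prompt_version, status).
LOG_RECORDS = [
    ("prompt-v7", "ok"), ("prompt-v7", "ok"), ("prompt-v7", "error"),
    ("prompt-v8", "ok"), ("prompt-v8", "error"), ("prompt-v8", "error"),
]

BASELINE_ERROR_RATE = 0.05  # assumed acceptable rate, set per team SLO


def error_rates(records):
    """Compute per-version error rates from (version, status) pairs."""
    totals, errors = Counter(), Counter()
    for version, status in records:
        totals[version] += 1
        if status == "error":
            errors[version] += 1
    return {v: errors[v] / totals[v] for v in totals}


for version, rate in error_rates(LOG_RECORDS).items():
    if rate > BASELINE_ERROR_RATE:
        print(f"ALERT: {version} error rate {rate:.0%} exceeds baseline")
```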

Release Readiness & Compliance

  • Define and enforce release readiness criteria for AI and data platforms
  • Provide quality sign-off for production releases, model and prompt deployments, and feature rollouts
  • Ensure alignment with enterprise governance, security, compliance, and Responsible AI principles

Collaboration & Leadership

  • Collaborate with architects, AI engineers, data teams, platform engineers, product owners, and delivery teams
  • Participate actively in Agile ceremonies and sprint planning
  • Provide technical leadership and mentoring to QA engineers
  • Promote a quality-first culture and scale GenAI-specific testing practices

Qualifications

  • 8-12+ years of experience in Quality Assurance or Quality Engineering
  • Proven experience leading QA for complex, enterprise-scale platforms
  • Strong experience testing data-intensive, AI/ML, or GenAI-based systems
  • Hands-on experience leading QA teams and automation initiatives

Technical Skills

  • Strong experience with test automation tools such as Selenium, Playwright, and Katalon
  • Proficiency in Python-based automation using PyTest and Requests
  • Strong SQL and NoSQL (MongoDB) skills for data validation and reconciliation
  • Experience testing modern web applications (React preferred)
  • Familiarity with GenAI systems, including RAG pipelines, embeddings, vector databases, prompt flows, and model APIs
  • Experience with cloud platforms (Azure or AWS), CI/CD pipelines, and automated release testing
  • Ability to analyze logs, debug failures, and collaborate effectively across engineering teams
  • Understanding of AI evaluation metrics, hallucination detection, and prompt testing techniques

Job ID: 139132529