This is a remote position.
Key Responsibilities:
Test Strategy & Planning: Develop comprehensive test strategies that align with business objectives and
regulatory requirements. Define the scope, resources, test levels, and techniques required for successful releases.
Testing AI-Influenced Products: Validate the behavior of AI agents and models. This includes verifying
decision-making logic, checking for hallucinations, and ensuring model outputs align with user intent in a non-
deterministic environment.
Database & Backend Verification: Execute complex queries (MySQL, Postgres, MongoDB) to verify data
integrity and consistency. Perform deep-dive API testing and analyze application logs to validate transactional
systems and debug issues.
Risk Analysis & Reporting: Perform risk analysis and report on critical QA metrics (e.g., defect leakage, release
readiness). Present findings to management to support informed, data-driven decision-making.
Process Improvement: Establish and refine QA standards, including bug reporting formats, test case
templates, and agile testing processes. Drive continuous improvement in product quality by keeping defects to a minimum.
Stakeholder Collaboration: Act as a bridge between the QA team, developers, and product managers to
ensure alignment on requirements and clear communication of quality risks.
Cross-Functional Collaboration: Work closely with product managers and domain experts to translate
business requirements into AI solutions. Communicate effectively with team members to iterate on features and
ensure the AI solutions are enterprise-ready, secure, and aligned with user needs.
Requirements
Required Skills & Experience:
Education & Experience: 4+ years of experience in software testing. Bachelor's degree in Computer Science,
Engineering, or a related field.
AI Platform Knowledge: Strong understanding of testing AI/ML products. Experience validating LLM
responses, testing RAG (Retrieval-Augmented Generation) pipelines, and identifying edge cases in AI behaviour
(e.g., prompt injection, hallucinations).
Database Expertise: Proficiency in writing complex SQL queries and handling NoSQL databases (MongoDB) to
validate backend logic and data states.
Advanced Functional Testing: Deep expertise in manual testing techniques beyond UI clicking. Must be
skilled in API Testing (using Postman/cURL), analysing JSON/XML payloads, and inspecting server logs.
Performance & Security Awareness: Ability to identify performance bottlenecks (latency, load issues) and
basic security vulnerabilities (IDOR, data exposure) during manual testing cycles.
SDLC Knowledge: Strong understanding of the Software Development Life Cycle (SDLC) and the Bug Life
Cycle.
Independence & Communication: Proven ability to handle tasks independently and own the quality of a feature from
conception to release. Strong verbal and written communication skills to articulate complex defects and strategies
to non-technical stakeholders.
Problem Solving: Demonstrated ability to tackle ambiguous or open-ended problems in a structured way.
Comfortable formulating experiments, evaluating results (using appropriate metrics), and iterating to improve
model performance.