
About the Role
We are seeking a highly motivated and detail-oriented Quality Assurance (QA) Engineer to join our team. You will play a key role in ensuring the reliability, accuracy, and user experience of our Conversational AI systems. This role involves testing dialogue models, NLP/LLM-based backends, and end-to-end conversational workflows across multiple channels (chat, voice, and enterprise system integrations).
Location: Bengaluru, India
Key Responsibilities
Test Planning & Strategy
Develop and execute comprehensive test plans for conversational AI features, including NLP pipelines, dialogue flows, integrations, and APIs.
Design functional, regression, performance, and exploratory test cases specific to conversational interfaces.
Conversational Testing
Validate intent recognition, entity extraction, context handling, and multi-turn dialogue management.
Test conversational flows for accuracy, completeness, and user experience.
Perform scenario-based and edge-case testing to capture unexpected user inputs.
Automation & Tooling
Build automated test scripts for API endpoints, dialogue flows, and ML model outputs (see the illustrative sketch after this list).
Work with frameworks like Postman, Cypress, PyTest, or custom NLP testing tools.
Contribute to CI/CD pipelines with automated regression tests.
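For illustration, a minimal PyTest sketch of the kind of automated check this work involves: validating intent recognition through an HTTP API. The endpoint URL, request/response shape, expected intents, and confidence threshold below are hypothetical assumptions, not details of our actual stack.

```python
# Minimal PyTest sketch for intent-recognition checks against an NLU API.
# NLU_ENDPOINT, the payload shape, and the 0.7 confidence threshold are
# illustrative assumptions only.
import pytest
import requests

NLU_ENDPOINT = "https://example.internal/api/nlu/parse"  # hypothetical endpoint

@pytest.mark.parametrize(
    "utterance,expected_intent",
    [
        ("I want to reset my password", "reset_password"),
        ("What are your support hours?", "support_hours"),
        ("asdf qwerty", "fallback"),  # edge case: nonsense input should land in fallback
    ],
)
def test_intent_recognition(utterance, expected_intent):
    response = requests.post(NLU_ENDPOINT, json={"text": utterance}, timeout=10)
    assert response.status_code == 200
    body = response.json()
    # Assumes the service returns a top intent with a name and confidence score.
    assert body["intent"]["name"] == expected_intent
    assert body["intent"]["confidence"] >= 0.7
```

Tests like this are typically parameterized over a labeled utterance set and wired into the CI/CD regression suite mentioned above.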
Collaboration
Partner with Data Scientists, ML Engineers, and Product Managers to define acceptance criteria for AI-driven features.
Provide feedback on conversational design and model performance.
Collaborate with DevOps on load testing and monitoring production systems.
Quality Advocacy
Ensure conversational AI products meet high standards for accuracy, robustness, fairness, and usability.
Help define and track KPIs (e.g., intent accuracy, response latency, conversation success rate).
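As a rough illustration of one such KPI, intent accuracy can be computed as the fraction of labeled evaluation utterances whose predicted intent matches the expected label. The function and the fake predictor below are hypothetical stand-ins, not our production tooling.

```python
# Illustrative sketch: intent-accuracy KPI over a labeled evaluation set.
# predict_intent is a hypothetical stand-in for whatever NLU client is used.
from typing import Callable

def intent_accuracy(
    examples: list[tuple[str, str]],
    predict_intent: Callable[[str], str],
) -> float:
    """Fraction of utterances whose predicted intent matches the expected label."""
    if not examples:
        return 0.0
    correct = sum(
        1 for utterance, expected in examples if predict_intent(utterance) == expected
    )
    return correct / len(examples)

if __name__ == "__main__":
    eval_set = [
        ("I want to reset my password", "reset_password"),
        ("Cancel my subscription", "cancel_subscription"),
    ]
    # Trivial fake predictor, used here only to demonstrate the calculation.
    fake_predict = lambda text: "reset_password" if "password" in text else "cancel_subscription"
    print(f"Intent accuracy: {intent_accuracy(eval_set, fake_predict):.2%}")
```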
Qualifications
3+ years of QA/testing experience, preferably with AI, ML, or NLP products.
Strong understanding of API testing, automated frameworks, and scripting languages (Python, JavaScript, or similar).
Experience testing chat agents, voice agents, or conversational interfaces is a plus.
Familiarity with NLP concepts (intents, entities, context, LLMs) and dialogue management frameworks (e.g., Rasa, Dialogflow, LLM APIs).
Experience with CI/CD pipelines and version control (Git).
Excellent problem-solving, analytical, and communication skills.
Nice-to-Haves
Exposure to A/B testing and model evaluation.
Experience with cloud platforms (AWS, GCP, Azure).
Familiarity with accessibility and localization testing for multilingual agents.
Interest in AI ethics, fairness, and bias testing.
Job ID: 131147851