We are looking for an AI Tester / GenAI QA Engineer to test and validate LLM-based and Agentic AI applications. The role focuses on testing AI workflows built with LangChain and LangGraph and on validating RAG pipelines, ensuring the accuracy, reliability, and performance of AI systems.
Key Responsibilities
- Test LLM, GenAI, and Agentic AI applications
- Validate RAG pipelines (retrieval accuracy, response relevance, hallucination checks)
- Test LangChain & LangGraph workflows, including agents, tools, and decision flows
- Write and execute automated test cases using Python & PyTest
- Perform prompt testing, regression testing, and edge-case validation
- Validate API responses, model outputs, and data integrity
- Work closely with AI engineers and product teams to identify issues early
- Document test scenarios, defects, and test reports
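To illustrate the kind of work the responsibilities above describe, here is a minimal sketch of an automated RAG validation test in Python with PyTest. The `retrieve` and `is_grounded` functions are hypothetical stand-ins (a toy in-memory corpus and a naive keyword-overlap hallucination check), not a real pipeline or library API.

```python
# Hypothetical RAG test sketch: `retrieve` and `is_grounded` are
# illustrative stand-ins, not part of any real framework.

def retrieve(question: str) -> list[str]:
    """Toy retriever: returns context passages matching a keyword."""
    corpus = {
        "refund": ["Refunds are processed within 14 days of purchase."],
        "shipping": ["Standard shipping takes 3-5 business days."],
    }
    for key, passages in corpus.items():
        if key in question.lower():
            return passages
    return []


def is_grounded(answer: str, passages: list[str]) -> bool:
    """Naive hallucination check: every content word in the answer
    must appear somewhere in the retrieved passages."""
    context = " ".join(passages).lower()
    words = [w.strip(".,") for w in answer.lower().split() if len(w) > 3]
    return bool(words) and all(w in context for w in words)


# PyTest-style cases (run with: pytest this_file.py)
def test_retrieval_returns_context():
    assert retrieve("How long do refunds take?") != []


def test_grounded_answer_passes():
    passages = retrieve("refund policy")
    assert is_grounded("Refunds processed within days", passages)


def test_hallucinated_answer_fails():
    passages = retrieve("refund policy")
    # "ninety days" is not supported by the retrieved context
    assert not is_grounded("Refunds take ninety days", passages)
```

In a real project the retriever would be the production RAG pipeline and the grounding check would use a stronger method (embedding similarity or an LLM judge), but the test structure, retrieval accuracy, relevance, and hallucination assertions, stays the same.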
Required Skills
- Strong experience in Python
- Hands-on experience with PyTest
- Experience testing GenAI / LLM applications
- Knowledge of LangChain and LangGraph
- Experience with RAG (Retrieval-Augmented Generation) testing
- Understanding of API testing
- Good knowledge of test case design & QA processes