About the Company:
ProductSquads was founded with a bold mission: to engineer capital efficiency through autonomous AI agents, exceptional engineering, and real-time decision intelligence. We're building an AI-native platform that redefines how software teams deliver value, whether through code written by humans, agents, or both. Our stack combines agentic AI systems, ML pipelines, and high-performance engineering workflows. This is your chance to build not just models, but systems that think, decide, and act. We're developing AI fabric tools, domain-intelligent agents, and real-time decision systems to power the next generation of product delivery.
About the Role:
We are looking for a QA professional with experience in both manual and automation testing, who is also proficient in using AI tools to enhance testing efficiency. The role involves validating AI-driven features and ensuring high-quality software delivery through intelligent test practices.
Responsibilities:
- LLM Application Testing: Design and execute comprehensive test strategies for applications powered by Large Language Models, focusing on functionality, user experience, and response accuracy.
- Conversational Flow Testing: Test chat interfaces, dialogue systems, and conversational AI applications to ensure proper flow, context retention, and appropriate responses.
- Input/Output Validation: Validate LLM application responses against different user inputs and edge cases, and ensure consistent application behavior across various scenarios.
- Prompt and Response Testing: Test various user prompts and validate that the application handles different types of queries appropriately and behaves as expected.
- End-to-End Application Testing: Design, develop, and execute detailed test cases for AI application functionality, including user interfaces, APIs, and integration points.
- Edge Case and Boundary Testing: Create and execute tests for edge cases, unusual user inputs, and boundary conditions to ensure the LLM application handles unexpected scenarios gracefully.
- Integration Testing: Test the integration between LLM services and application components, including APIs, databases, and third-party services.
- User Experience Testing: Ensure LLM-powered features provide intuitive and helpful user experiences, testing response times and application usability.
- Cross-functional Collaboration: Work closely with Backend Developers, Frontend Developers, Product teams, and stakeholders to understand LLM application requirements and ensure comprehensive test coverage.
- Documentation and Reporting: Document test cases for LLM application features and report test results and defects clearly.
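To make the input/output validation and edge-case responsibilities above concrete, here is a minimal sketch of the kind of check a QA engineer in this role might automate. Everything here is illustrative: `stub_chat_client` stands in for whatever LLM service the application actually calls, and the response envelope fields (`status`, `text`) are hypothetical.

```python
# Minimal sketch of input/output validation for an LLM-backed chat feature.
# The client below is a stub standing in for a real LLM service call, so the
# example runs offline; in practice it would hit the application's API.

def stub_chat_client(prompt: str) -> dict:
    """Stand-in for a real LLM API; returns a canned response envelope."""
    if not prompt.strip():
        return {"status": "error", "text": ""}
    return {"status": "ok", "text": f"Echo: {prompt}"}

def validate_response(response: dict) -> list:
    """Collect validation failures for one response envelope."""
    failures = []
    if response.get("status") != "ok":
        failures.append("non-ok status")
    if not response.get("text", "").strip():
        failures.append("empty response text")
    return failures

# Edge cases of the kind the posting calls out: empty input,
# very long input, and unusual characters.
edge_cases = ["", "hello", "a" * 10_000, "□�☃"]
results = {case[:10] or "<empty>": validate_response(stub_chat_client(case))
           for case in edge_cases}
```

The same `validate_response` helper can then be wrapped in parameterized test cases (e.g. with pytest) so each edge case is reported as a separate pass/fail result.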
Qualifications:
- Education: Bachelor's degree in Computer Science, Engineering, Data Science, or related technical field.
- Experience: 3-5 years of hands-on experience in QA testing with experience or strong interest in testing applications that integrate with AI/LLM services.
- LLM Application Knowledge: Basic understanding of how Large Language Models work within applications, API integrations, and conversational AI user interfaces.
- Testing Expertise: Proven experience in both Manual and Automation Testing with deep understanding of SDLC, STLC, and AI-specific testing methodologies.
Required Skills:
- Proficiency in test automation tools and frameworks
- Strong API testing experience (Postman, REST Assured, etc.)
- Understanding of web technologies (HTML, CSS, JavaScript basics)
- Knowledge of databases and SQL for data validation
- Experience with cloud platforms and basic understanding of microservices architecture
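As a sketch of the "SQL for data validation" skill listed above: a QA engineer might verify stored conversation data directly against the database rather than only through the UI. The table name and columns here are hypothetical, and an in-memory SQLite database stands in for whatever database the real application uses.

```python
import sqlite3

# Hypothetical data-validation check: every conversation row the application
# stored should have a non-empty response and an "ok" status.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversations (id INTEGER, response TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO conversations VALUES (?, ?, ?)",
    [(1, "Hi there", "ok"), (2, "Sure, here is how...", "ok")],
)

# Count rows that violate the expected invariants.
bad_rows = conn.execute(
    "SELECT COUNT(*) FROM conversations "
    "WHERE response IS NULL OR TRIM(response) = '' OR status != 'ok'"
).fetchone()[0]
```

A query like this can run as a post-deployment smoke check: a nonzero `bad_rows` count flags data the application persisted incorrectly.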
Preferred Skills:
- Analytical Mindset: Excellent troubleshooting and analytical skills with ability to think critically about application behavior and identify potential issues in LLM-powered features.
- Detail-Oriented: Meticulous attention to detail when testing conversational flows, user interactions, and application responses.
- Communication: Strong written and verbal communication skills to effectively test conversational AI and collaborate with development teams.
- Adaptability: Ability to quickly learn new technologies related to LLM applications and adapt testing strategies for evolving AI-powered features.
- Problem-Solving: Proactive approach to identifying edge cases and potential user experience issues in LLM applications.