We are seeking an experienced SDET Automation with 4 to 8 years of hands-on experience in building scalable automation solutions and a strong interest or experience in Generative AIdriven testing approaches. The ideal candidate combines solid engineering fundamentals with modern QA practices, including AI-assisted test design, validation, and quality engineering in Agile environments.
Key Responsibilities
- Design, develop, and maintain robust automation frameworks for Web, API, and Mobile applications
- Build and execute end-to-end automated test suites covering functional, regression, and integration scenarios
- Apply GenAI techniques to improve test coverage, test data generation, and exploratory testing
- Validate AI/ML-powered features, including GenAI outputs for accuracy, relevance, bias, and consistency
- Collaborate with developers, product, data, and DevOps teams to ensure quality across SDLC
- Automate API and backend testing including validation of data pipelines and services
- Integrate automation with CI/CD pipelines and ensure reliable test execution
- Analyze failures, identify root causes, and drive defect resolution (with GenAI is a plus here)
- Contribute to test strategy, quality metrics, and release readiness
Required Skills & Qualifications
Experience
- 68 years of experience in Automation Testing / SDET roles
- Proven experience building or enhancing automation frameworks from scratch
Programming & Automation
- Strong coding skills in Java / Python / JavaScript
- Hands-on experience with Selenium / Playwright / Cypress
- Experience with API automation using RestAssured / Postman / Karate
- Strong understanding of OOP, data structures, and design patterns
Generative AI / AI Testing
- Working knowledge of Generative AI concepts (LLMs, prompts, embeddings, hallucinations, evaluation metrics)
- Experience with Claude-Code is advantage
- Experience testing GenAI-enabled features such as chatbots, summarization, recommendation, or content generation
- Hands-on exposure to prompt engineering, prompt versioning, and prompt evaluation
- Experience validating LLM outputs for correctness, relevance, safety, bias, and determinism
- Familiarity with LLM APIs (e.g., OpenAI, Azure OpenAI, or similar)
- Understanding of AI testing strategies including golden datasets, synthetic data generation, and regression testing for AI models
Testing & Tools
- Experience with TestNG / JUnit / PyTest / Mocha
- Knowledge of BDD frameworks (Cucumber / SpecFlow good to have)
- Database testing using SQL
- Experience with Git and version control systems
DevOps & Environment
- Experience integrating automation with CI/CD tools (Jenkins, GitHub Actions, GitLab CI)
- Exposure to Docker and cloud platforms (AWS / GCP / Azure)
Good to Have
- Experience using AI-assisted testing tools or test generation frameworks
- Knowledge of model evaluation techniques (BLEU, ROUGE, semantic similarity, human-in-the-loop validation)
- Performance testing experience (JMeter, Gatling)
- Experience testing microservices and distributed systems
- Exposure to security testing for AI applications (prompt injection, data leakage)