
Role: QA Automation Engineer (AI-Native, Playwright + Claude Code)
Location: Remote (Work from Home)
We're hiring a QA Automation Engineer (5 to 10 years of experience) to drive automation-first, AI-assisted testing across our backend platform. You'll spend most of your time building Playwright suites, designing AI agents, and shaping what quality looks like in an agent-driven SDLC — with manual testing reserved only for exploratory and edge cases automation can't yet reach.
Responsibilities
● Build and own automated test suites using Playwright for end-to-end, UI, and API testing across backend services and platform workflows.
● Use Claude Code (agents, skills, MCP servers) as a daily driver — generating test cases, test data, and edge-case scenarios from requirements, PRs, and production signals — shrinking test authoring time from days to hours.
● Design AI agents that own slices of the SDLC — from PR-triggered test design to autonomous regression triage and release sign-off.
● Build custom Claude skills and MCP servers that integrate QA workflows with Jira, CI/CD pipelines, test data stores, ScyllaDB/Kafka inspection tools, and observability platforms.
● Apply AI-driven defect prediction, root cause analysis, and flaky test detection to focus effort where it matters most.
● Embed automated quality gates into CI/CD (GitHub Actions) — including AI-powered code review, test impact analysis, and regression triggers — to shift quality left.
● Validate datasets using SQL across transactional and analytical systems.
Requirements
● 5+ years in QA with a clear bias toward automation and tooling over manual execution, and a track record of measurably reducing manual effort or escape defects.
● Strong hands-on experience with Playwright (or an equivalent modern framework such as Cypress or Selenium) for UI and API automation, including Page Object Model and TestNG-style framework design.
● Proficiency in Java, JavaScript/TypeScript, or Python, with solid API testing skills using Postman/Insomnia.
● Strong SQL for data validation, plus working knowledge of NoSQL databases (MongoDB, ScyllaDB, or DynamoDB).
● Working understanding of API/backend test lifecycles and hands-on exposure to AWS.
● CI/CD experience with GitHub Actions, Jenkins, or equivalent.
● Working fluency in LLM-powered development workflows (Claude Code, Cursor, Copilot, or similar) for test generation, debugging, and review.
● Grasp of AI/ML fundamentals — prompt engineering, context management, and evaluating LLM outputs for correctness and reliability.
● Agile/Scrum experience and clear async communication for remote, cross-functional work.
Nice to Have
● Built MCP servers or custom Claude skills/tools to extend AI agent capabilities.
● Exposure to agentic test orchestration — autonomous agents that plan, execute, and adapt test runs based on code or requirement changes.
● Experience measuring AI-assisted productivity metrics (test creation velocity, coverage uplift, defect escape rate).
Why Join Us
● Build automation for high-scale systems powering global hotel distribution.
● Work at the frontier of AI-native QA — Claude Code, Playwright, and custom MCP agents reimagining testing across the SDLC.
● Help shape the quality engineering culture of a fast-growing, AI-first org.
● Remote-first team that values ownership, clarity, and async collaboration.
Job ID: 145807333