InfoTechWorks

Senior Test Automation Engineer / SDET

7-9 Years

Job Description

About Infotech.Works

Infotech.Works is an AI-first product engineering company helping businesses innovate, optimize, and scale. From custom web and mobile apps to enterprise automation and cloud platforms, we deliver solutions that don't just solve problems - they unlock possibilities. Headquartered in Pune, we partner with global brands and ambitious startups to build technology that shapes the future.

About the Role

We are looking for a highly experienced Senior Test Automation Engineer / SDET to design and build a comprehensive automated test strategy and test suite for a highly dynamic, metadata-driven application platform.

Our platform is a low-code / no-code / pro-code system (Golang / Python), with a web client (TypeScript / React) that consumes a rich API surface. The platform dynamically renders application pages from YAML-based metadata definitions. A view definition, for example, can declare components, layout, styling, behavior, and data interactions, and the runtime interprets that metadata to build the user experience on the fly. In addition, the product includes AI-powered application-building experiences.

Users can leverage AI agents, MCP tools, and related workflows to help create, configure, and evolve applications.

This is not a traditional CRUD web application testing role.

Our system is multi-tenant, context-sensitive, and deeply layered. Request execution depends on URL-derived context such as tenant, site, and workspace boundaries. Runtime behavior varies based on metadata, execution context, loaded data, authorization, platform state, and AI-assisted workflows. The complexity is high, and we need someone who can bring structure, rigor, and creativity to quality assurance across the entire stack.

The primary goal of this role is to create an automated regression and resilience test suite that gives engineering confidence to move quickly without breaking critical platform behavior. Just as importantly, we want someone who can proactively uncover defects that have not yet been discovered by intentionally stress-testing and breaking the system in controlled ways.
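To make the "intentionally breaking the system" idea concrete, the kind of negative-path automation involved might look like the sketch below. Everything here is a hypothetical stand-in, not the platform's real API: `validate_view_definition`, `ViewValidationError`, and the required keys are invented for illustration, and view metadata is shown as already-parsed dicts rather than raw YAML.

```python
# Hypothetical sketch: names below are illustrative stand-ins, not the
# platform's actual API. View metadata is represented as parsed dicts
# standing in for YAML view definitions.

class ViewValidationError(Exception):
    """Raised when a view definition fails structural validation."""

REQUIRED_KEYS = {"name", "components"}

def validate_view_definition(view: dict) -> None:
    """Reject malformed, incomplete, or conflicting view metadata."""
    missing = REQUIRED_KEYS - view.keys()
    if missing:
        raise ViewValidationError(f"missing keys: {sorted(missing)}")
    if not isinstance(view["components"], list):
        raise ViewValidationError("components must be a list")
    ids = [c.get("id") for c in view["components"]]
    if len(ids) != len(set(ids)):
        raise ViewValidationError("conflicting definition: duplicate component ids")

def rejects(view: dict) -> bool:
    """True if the platform refuses the definition instead of rendering it."""
    try:
        validate_view_definition(view)
        return False
    except ViewValidationError:
        return True

# Positive path: a well-formed view is accepted.
assert not rejects({"name": "home", "components": [{"id": "hdr"}]})
# Negative paths: incomplete, malformed, and conflicting metadata
# must each fail loudly rather than render unpredictably.
assert rejects({"name": "home"})
assert rejects({"name": "home", "components": "oops"})
assert rejects({"name": "home", "components": [{"id": "a"}, {"id": "a"}]})
```

The point of tests like these is that each bad input is rejected deterministically, never silently rendered.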

What You Will Do

• Define and implement an end-to-end test automation strategy for a complex metadata-driven platform

• Build automated test coverage across:

o API layers

o platform runtime behavior

o metadata interpretation and rendering

o multi-tenant routing and execution context

o site vs workspace behaviors

o authentication, authorization, and session flows

o dynamic UI rendering and interaction

o AI-assisted workflows and platform integrations

• Design regression suites that validate both expected behavior and known high-risk platform areas

• Create adversarial, exploratory, and negative-path automation intended to expose hidden bugs and edge-case failures

• Develop tests that validate the correctness of context resolution from URL structure and request parameters

• Verify that the platform correctly handles tenant isolation and boundary conditions

• Build test coverage for metadata-defined views, including malformed, incomplete, conflicting, and unexpected YAML configurations

• Validate AI-related user flows, including invocation of agents, tool-driven actions, generated updates, and application of results into the platform

• Verify that AI-powered capabilities behave correctly across tenant, site, and workspace boundaries

• Validate that changes in backend services do not introduce regressions in platform execution, data loading, rendering behavior, or AI-assisted workflows

• Create automated validation for site and workspace execution differences

• Build a maintainable framework for:

o API testing

o UI/browser testing

o integration testing

o contract testing

o negative testing

o concurrency and resilience testing

• Partner with engineering leadership, backend engineers, frontend engineers, and AI engineers to identify critical risk areas and define release gates

• Integrate automated tests into CI/CD pipelines so regressions are detected early

• Help establish quality standards, test data strategies, environment strategies, and defect triage practices

• Produce actionable reporting on coverage, quality trends, flaky tests, and platform risk

What We Need

We need someone who knows how to test systems where:

• behavior is generated dynamically from metadata

• UI structure is not static - request execution depends on runtime context

• multiple tenant and application boundaries exist

• AI capabilities are embedded into user-facing platform workflows

• defects often live in interaction effects between layers, not just in isolated functions

You should be comfortable working in ambiguity, digging through complex behavior, and building test systems that are both strategic and practical.
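Because request execution depends on URL-derived context, testing context resolution is a recurring theme in this role. A minimal sketch of what such a test might exercise is below; `resolve_context`, `ExecutionContext`, and the assumed URL layout (`https://{tenant}.example.app/{site|workspace}/{name}/...`) are all hypothetical, not the platform's real scheme.

```python
# Hypothetical sketch: the URL layout, resolve_context, and
# ExecutionContext are illustrative assumptions, not the real platform.
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass(frozen=True)
class ExecutionContext:
    tenant: str
    scope: str       # "site" or "workspace"
    scope_name: str

def resolve_context(url: str) -> ExecutionContext:
    """Derive tenant and site/workspace boundaries from a request URL.

    Assumed layout: https://{tenant}.example.app/{site|workspace}/{name}/...
    """
    parsed = urlparse(url)
    tenant = parsed.hostname.split(".")[0]
    parts = [p for p in parsed.path.split("/") if p]
    if len(parts) < 2 or parts[0] not in ("site", "workspace"):
        raise ValueError(f"cannot resolve context from {url!r}")
    return ExecutionContext(tenant=tenant, scope=parts[0], scope_name=parts[1])

# Positive path: context is derived entirely from the URL.
ctx = resolve_context("https://acme.example.app/workspace/billing/views/home")
assert ctx == ExecutionContext("acme", "workspace", "billing")

# Negative path: an ambiguous URL must fail loudly,
# never silently default to some tenant or scope.
try:
    resolve_context("https://acme.example.app/unknown/billing")
    assert False, "expected ValueError"
except ValueError:
    pass
```

The negative branch matters as much as the positive one: in a multi-tenant system, a context resolver that guesses instead of failing is a tenant-isolation bug waiting to happen.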

Required Qualifications

• 7+ years in software quality engineering, test automation, or SDET roles

• Proven experience building automation frameworks from scratch for complex web platforms or platform products

• Strong experience testing APIs, web applications, and distributed systems

• Deep understanding of regression testing, integration testing, end-to-end testing, and negative-path testing

• Strong experience with test automation tools for browser and API testing

• Strong understanding of CI/CD integration for automated test suites

• Experience designing test data and environment strategies for complex systems

• Ability to reason about system behavior across backend, API, and frontend layers

• Strong debugging skills and ability to isolate issues in highly layered architectures

• Experience validating multi-tenant and role-based access behavior

• Strong written and verbal communication skills

• Ability to operate independently and drive quality initiatives across teams

Preferred Qualifications

• Experience testing platforms built in Golang, Python, TypeScript, React

• Experience with Playwright, Cypress, Selenium, Postman/Newman, REST Assured, k6, or similar tools

• Experience with contract testing and schema validation

• Experience with metadata-driven, rules-driven, configuration-driven, or dynamically rendered systems

• Experience with AI-enabled product workflows, agent-driven interfaces, tool-invocation flows, or LLM-backed features

• Experience with fuzz testing, property-based testing, chaos testing, or fault injection

• Experience in low-code/no-code/pro-code platform testing

• Experience testing systems with separate development/runtime contexts such as draft/published, workspace/site, or authoring/runtime environments

• Familiarity with containerized test environments and ephemeral environments

• Familiarity with observability tools, logs, traces, and metrics for diagnosing failures

• Experience with security-oriented testing, especially tenant isolation and authorization edge cases


Job ID: 146656853