Job Description
Job Summary
We are seeking a highly skilled Quality Engineer with 5–10 years of experience who is passionate about using AI to transform how test automation is built and maintained. The ideal candidate uses AI-powered agentic tools (such as Playwright in agentic mode, Claude, Cursor, or GitHub Copilot) to autonomously generate, execute, and iterate on automation scripts across the SmartFM platform. Rather than writing every test by hand, this engineer directs AI agents to produce and refine test automation code, dramatically accelerating test coverage across applications, data pipelines, AI/ML systems, and APIs. This role sits at the cutting edge of modern QA engineering, combining deep automation expertise with hands-on experience orchestrating AI agents as a force multiplier for quality.
Roles And Responsibilities
Core QA & Test Strategy
· Develop and implement end-to-end quality assurance strategies and test plans for applications, data pipelines, data transformations, APIs, and machine learning models within the SmartFM platform.
· Collaborate with the Engineering team throughout the product development lifecycle to ensure alignment with end-user product and quality expectations, as well as adherence to delivery timelines.
· Establish the test strategy, build automated test suites, and automate functional testing for existing and new applications.
· Create test cases, test scripts, and test data from user stories, and ensure new functionality meets acceptance criteria.
· Document testing procedures, test results, and data quality metrics, providing clear and actionable insights to cross-functional teams.
AI Agentic Automation
· Use AI agentic tools (e.g., Playwright's AI/MCP mode, Claude, Copilot) to autonomously generate, execute, refine, and maintain automation scripts — treating AI as a co-engineer in the test development workflow.
· Direct and prompt AI agents to produce Playwright test suites covering end-to-end UI flows, API contracts, and integration scenarios across the SmartFM platform.
· Review, validate, and improve AI-generated automation scripts, applying engineering judgment to ensure correctness, maintainability, and coverage quality.
Data, ML and Pipelines
· Perform rigorous data validation and quality checks on data stored in NoSQL databases (MongoDB), including schema validation, data integrity checks, and performance testing of data retrieval.
· Collaborate closely with Data Engineers to ensure the robustness and scalability of data pipelines and to identify and resolve data quality issues at their source.
· Work with Data Scientists to validate the performance, accuracy, fairness, and robustness of Machine Learning, Deep Learning, Agentic Workflows, and LLM-based models. This includes testing model predictions, evaluating metrics, and identifying potential biases.
· Implement automated testing frameworks for data quality, pipeline validation, and model performance monitoring.
· Stay updated with the latest trends and tools in data quality assurance and MLOps, advocating for continuous improvement in our quality processes.
· Collaborate with product and implementation teams to resolve defects and improve product quality.
· Coordinate with cross-functional teams and stakeholders to achieve project alignment.
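To give a flavor of the data validation and quality checks described above, here is a minimal, self-contained sketch of a document audit. The sensor documents, field names, and unit policy are hypothetical stand-ins for records retrieved from MongoDB; only the Python standard library is used.

```python
from collections import Counter

# Hypothetical SmartFM sensor documents, as they might be read from MongoDB.
documents = [
    {"_id": "a1", "sensor_id": "S-100", "reading": 21.5, "unit": "C"},
    {"_id": "a2", "sensor_id": "S-101", "reading": None, "unit": "C"},
    {"_id": "a3", "sensor_id": "S-100", "reading": 22.1, "unit": "F"},
]

REQUIRED_FIELDS = {"_id", "sensor_id", "reading", "unit"}
ALLOWED_UNITS = {"C"}  # assumed policy: all readings stored in Celsius

def audit(docs):
    """Return data-quality findings: missing fields, null readings,
    duplicate ids, and out-of-policy units."""
    findings = {"missing_fields": [], "null_readings": [],
                "bad_units": [], "duplicate_ids": []}
    id_counts = Counter(d.get("_id") for d in docs)
    findings["duplicate_ids"] = [i for i, n in id_counts.items() if n > 1]
    for d in docs:
        if not REQUIRED_FIELDS <= d.keys():          # schema check
            findings["missing_fields"].append(d.get("_id"))
        if d.get("reading") is None:                  # integrity check
            findings["null_readings"].append(d.get("_id"))
        if d.get("unit") not in ALLOWED_UNITS:        # policy check
            findings["bad_units"].append(d.get("_id"))
    return findings

report = audit(documents)
print(report)  # flags a2 (null reading) and a3 (unit 'F')
```

A real suite would run checks like these against live collections (e.g. via PyMongo) and publish the findings as the data quality metrics mentioned above.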
Required Technical Skills And Experience
· 5–10 years of professional experience in Quality Assurance, with a significant focus on automated application testing, data quality, or ML model testing.
· Strong proficiency in test automation tools such as Playwright.
· CI/CD integration experience (GitHub Actions, Azure DevOps, Jenkins).
· Strong proficiency in SQL for complex data validation, querying, and analysis across large datasets.
· Proven experience testing and validating data stored in MongoDB or similar NoSQL databases.
· Understanding of Agentic Workflows and LLMs from a testing perspective, including prompt validation and output quality assessment.
· Experience with API testing and using Playwright or similar tools to automate API validation.
· Solid understanding of data pipeline concepts (ingestion, transformation, validation) and the ability to direct AI agents to generate appropriate quality checks across these layers.
· Proficiency in Python for scripting, test automation, and data validation.
· Familiarity with Machine Learning and Deep Learning concepts, including model evaluation metrics, bias detection, and performance testing.
· Knowledge of cloud platforms (Azure, AWS, or GCP) and their relevance to deploying and running automated test infrastructure.
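The SQL data validation skill above typically amounts to queries like duplicate detection and null-rate checks. The following sketch illustrates the idea against an in-memory SQLite table via Python's standard library; the table name, schema, and sample rows are purely illustrative.

```python
import sqlite3

# In-memory stand-in for a pipeline target table; schema and data are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (sensor_id TEXT, ts TEXT, value REAL);
INSERT INTO readings VALUES
  ('S-100', '2024-01-01T00:00', 21.5),
  ('S-100', '2024-01-01T00:00', 21.5),  -- duplicate row
  ('S-101', '2024-01-01T00:00', NULL);  -- missing value
""")

# Duplicate detection: (sensor_id, ts) pairs that appear more than once.
dupes = conn.execute("""
    SELECT sensor_id, ts, COUNT(*) AS n
    FROM readings
    GROUP BY sensor_id, ts
    HAVING n > 1
""").fetchall()

# Null check: rows where the measured value is missing.
null_count = conn.execute(
    "SELECT COUNT(*) FROM readings WHERE value IS NULL"
).fetchone()[0]

print(dupes)       # one duplicated (sensor_id, ts) pair
print(null_count)  # one NULL value
```

The same queries translate directly to a production warehouse; only the connection and dialect details change.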
Additional Qualifications
· Communication: Exceptional communication skills to engage with stakeholders, clients, and senior leadership effectively.
· Collaboration: Strong skills in fostering cross-functional teamwork and aligning goals with stakeholders.
· Ability to comprehend, analyze, and interpret documents effectively.
· Highly motivated to acquire new skills, explore emerging technologies in data quality and AI/ML testing, and stay updated on the latest industry best practices.
· Domain knowledge in facility management, IoT, or building automation is a plus.
Education Requirements / Experience
Bachelor's (BE / BTech) / Master's degree (MS/MTech) in Computer Science, Information Systems, Engineering, Statistics, or a related field.