Location Name: Pune Corporate Office - Mantri
Job Purpose
- Identify and exploit vulnerabilities in AI models and systems through realistic adversarial simulations to prevent exploitation by malicious actors.
- Evaluate models for security weaknesses and risks like Supply chain, AIBOM vulnerabilities across GenAI & Agentic platform.
- Hands on experience on critical attack techniques including prompt injection, jailbreaking, data poisoning, model inversion, and good understanding on OWASP LLM Top 10, OWASP Agentic Top 10, MITRE ATLAS framework.
- Plan and conduct targeted red team engagements across large language models, generative AI applications, and supporting ML infrastructure.
- Produce detailed, actionable reports with clear attack reproductions, impact assessments, and recommended hardening measures.
- Good understanding on Guardrail implementation, Rule engine, Policy configuration and AI Runtime security.
- Safeguard live AI inference environments and deployed models against active runtime threats including model extraction, evasion, backdoor activation, and prompt-based attacks.
- Implement layered runtime defenses such as input/output sanitization, anomaly detection, rate limiting, and content scanning tailored to AI workloads.
- Perform threat modeling and risk analysis tailored to agentic architectures, focusing on risks such as goal misalignment, tool misuse, and privilege escalation.
- Work closely with MLOps, platform, and SRE teams to embed runtime protection practices into deployment pipelines and operational processes.
- Lead incident response for AI-specific security events, performing root-cause analysis, containment, and remediation with minimal service disruption.
- Assess Third Party Partner vulnerabilities and security risk
- Create and maintain specialized tooling, test harnesses, and automation to enable efficient, repeatable adversarial security testing for AI systems.
- Regular cadence with AI Development team for secure design, suggest architectural reviews.
- Connect with AI
Duties And Responsibilities
A-Minimum required Accountabilities for this role
- Engineering / Computer Graduate with 4-6 years of Information / Cyber Security Experience and minimum 1-2 years of hands-on experience on AI Red Teaming, AI Guardrails implementation will be preferred.
B-Additional Accountabilities Pertaining To The Role
- AI securityrelated certifications (Good to Have)
Key Decisions / Dimensions
- Involvement in Activity calendar planning and execution of AI applications
- Preparation of testcases for AI Red Teaming and highlight the risk imposing of the vulnerabilities
Major Challenges
- AI Red Teaming understanding, execution, reporting & timely closure from stakeholders
- Change management and AI sign off with in defined TAT
Required Qualifications And Experience
- Qualifications
- Post-Graduates with relevant security experience of 4-6years (also graduates with experience of 6-8 years may apply)
- Work Experience
- Engineering / Computer Graduate with 4-6 years of Information / Cyber Security Experience
- AI securityrelated certifications (Good to Have)
- Understanding of OWASP LLM Top 10, OWASP Agentic Top 10, MITRE ATLAS framework.
- Common AI-specific threats: Prompt injection (direct & indirect), Model poisoning & data poisoning, Model theft / extraction, Hallucination abuse & unsafe outputs
- Threat modeling for AI systems (LLMs, RAG, agents)
- Strong understanding of application security (OWASP Top 10, API Security Top 10)
- Secure SDLC practices (design reviews, threat modelling, secure coding)