Technologies: SD Elements, Jira, Confluence, Architecture Repositories, AI and Large Language Model (LLM) Platforms, Telemetry and Observability Tools
Description
GSPANN is hiring a Threat Modeling and AI Security Lead to drive enterprise-wide threat modeling and AI security initiatives. The role focuses on embedding secure design practices, governing AI risk, and enabling safe adoption of modern AI and LLM platforms.
Location: Gurugram / Hyderabad
Role Type: Full Time
Published On: 23 December 2025
Experience: 10 - 12 Years
Role and Responsibilities
- Implement STRIDE, PASTA, and LINDDUN threat modeling frameworks across the organization.
- Develop comprehensive threat models, including architecture diagrams, trust boundaries, data flows, abuse and misuse scenarios, and risk scoring.
- Collaborate with architects, developers, and product teams to define security requirements during design and planning phases.
- Operate SD Elements to manage secure design patterns and requirements, track progress in Jira, and maintain documentation in Confluence.
- Report on threat modeling adoption, coverage, and measurable risk reduction outcomes.
- Intake and triage AI and machine learning use cases based on risk and business impact.
- Perform AI-focused threat modeling, including prompt injection, jailbreaks, data exfiltration, and model poisoning scenarios.
- Define secure patterns for Retrieval-Augmented Generation (RAG) and agent-based systems, covering prompts, isolation, and secrets management.
- Implement AI guardrails such as allow and deny policies, content filtering, rate limiting, token protections, and provenance or watermarking where available.
- Enforce data security practices, including personally identifiable information (PII) minimization, data classification, retention policies, and data masking.
- Conduct model and provider security assessments across internal, open-source, and cloud-based AI platforms.
- Lead AI red teaming and adversarial testing activities.
- Implement MLOps / LLMOps CI/CD security gates, policy-as-code controls, model registry governance, and drift or bias checks where applicable.
- Define monitoring strategies and incident response playbooks for AI-related security incidents.
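The guardrail responsibilities listed above (allow and deny policies, content filtering, rate limiting) could take many forms in practice. As a minimal sketch only, with hypothetical names and a made-up deny-list, a pre-request guardrail check might look like:

```python
import time

# Hypothetical deny-list of phrases commonly seen in prompt-injection attempts.
DENY_PATTERNS = ["ignore previous instructions", "reveal your system prompt"]


def content_filter(prompt: str) -> bool:
    """Return True if the prompt passes the deny-list check."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in DENY_PATTERNS)


class RateLimiter:
    """Minimal sliding-window rate limiter keyed by caller identity."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._calls = {}  # caller -> list of request timestamps

    def allow(self, caller: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Keep only timestamps still inside the window.
        recent = [t for t in self._calls.get(caller, []) if now - t < self.window_seconds]
        if len(recent) >= self.max_requests:
            self._calls[caller] = recent
            return False
        recent.append(now)
        self._calls[caller] = recent
        return True


def guardrail(prompt: str, caller: str, limiter: RateLimiter) -> str:
    """Apply rate limiting first, then content filtering; return an allow/deny decision."""
    if not limiter.allow(caller):
        return "deny: rate limit exceeded"
    if not content_filter(prompt):
        return "deny: blocked content"
    return "allow"
```

A real deployment would layer this with provider-side moderation, token protections, and logging, but the allow/deny decision shape shown here is the common core.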
Skills And Experience
- 10–12 years of experience in Application Security, with deep expertise in threat modeling and AI security.
- Strong hands-on experience with SD Elements, Jira, and Confluence.
- Familiarity with AI and Large Language Model (LLM) platforms, including observability and monitoring tools.
- Strong architecture literacy across microservices, APIs, web, and mobile applications.