Senior Security Engineer (AI Secure Design)
The AI Secure Design team is the organization's authoritative security review body for AI technologies, tools, and platforms proposed for enterprise adoption. As the team's Senior Engineer, you will carry out independent security evaluations of these technologies, ensuring they are adopted in a secure, well‑governed, and enterprise‑ready manner before they are made available to business or Information Technology teams. The team does not review application implementations or code; its focus is technology research, security capability assessment, risk analysis, and guidance. This is a technology‑focused security role, not an application security or SDLC role.
Scope of Responsibility
- AI developer tools (e.g., coding assistants, copilots)
- AI platforms and services (LLMs, GenAI APIs, agentic platforms, RAG frameworks)
- Open-source AI tooling proposed for enterprise use
Core Responsibilities
- Continuously research emerging AI technologies, tools, SDKs, platforms, and frameworks relevant to developers.
- Define and maintain the AI Security Evaluation Framework used across the organization.
- Perform deep security capability assessments of AI technologies prior to approval.
- Monitor emerging AI technology trends and their security implications.
- Influence enterprise AI adoption strategy through proactive security research.
- Track architectural trends such as:
  - LLM hosting models (SaaS vs self‑hosted)
  - Agentic platforms and tool‑use patterns
  - Retrieval‑Augmented Generation (RAG) ecosystems
- Maintain an internal inventory and taxonomy of AI technologies under evaluation or approved use.
- Evaluate vendors and platforms across areas including:
  - Data handling, retention, and isolation
  - Prompt and input handling controls
  - Output handling and downstream risk exposure
  - Model access control and tenancy isolation
  - Logging, auditability, and administrative controls
  - Vendor security posture (certifications, transparency, maturity)
- Identify design‑level security risks inherent to the technology (not developer misuse).
- Analyze and document technology‑specific risk profiles.
- Clearly articulate risk conditions, assumptions, and constraints under which the technology can be safely used.
- Produce formal AI Security Evaluation Reports for each technology, including:
  - Executive summary for leadership
  - Security architecture overview
  - Key risks and mitigations
  - Approved, restricted, or disallowed usage scenarios
- Provide recommendations on:
  - Whether the technology is suitable for enterprise use
  - What classes of use cases are allowed/disallowed
  - Required safeguards or governance controls prior to adoption
- Collaborate with Information Security stakeholders to define enterprise standards for approved AI technologies, including:
  - Acceptable and prohibited usage patterns
  - Data categories allowed in AI interactions
  - Integration constraints with enterprise systems
  - Identity, access, and permission expectations
- Publish clear, developer‑consumable guidance explaining:
  - What AI tools are approved
  - How they may be used safely
  - What developers must not do
- Serve as a trusted advisor to security leadership, architecture boards, and engineering leadership on AI adoption risk.
- Provide early security input during technology selection—not after tools are already embedded.
- Act as the single authoritative source of security opinion on AI tool approval decisions.
Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Cybersecurity, or related field.
- 4+ years of experience in Information Security, with a focus on application and product security.
- Demonstrated ability to evaluate the security of AI platforms, tools, and software.
- Solid understanding of:
  - Cloud and SaaS security models
  - Identity and access control concepts
  - Data protection and isolation mechanisms
- Familiarity with modern AI system components:
  - LLM APIs and hosting models
  - Agent frameworks and tool invocation
  - RAG pipelines and vector stores
- Understanding of GenAI and LLM‑specific risks, including:
  - Prompt injection and indirect prompt injection
  - Insecure output handling
  - Model abuse and misuse
  - Data poisoning and supply‑chain risk
- Ability to translate AI‑specific risks into enterprise security language for decision‑makers.
- Strong research and evaluation mindset.
- Ability to produce clear, defensible evaluation reports.
- Comfortable presenting trade‑offs and risk‑based recommendations.
- Ability to say "not suitable," backed by evidence, when required.
- Experience evaluating or approving third‑party developer platforms or SaaS tools.
- Exposure to AI governance or AI risk management frameworks.
- Experience working with product, platform, or architecture review boards.
Key Outputs (What This Team Delivers)
- AI Security Evaluation Reports (per tool/platform)
- Approved / Restricted / Disallowed AI Technology List
- Business/IT‑Facing Guidance on AI Tool Usage