
AI Security Specialist
(GenAI Platform Gemini Enterprise)
About the Role
We are seeking an experienced AI Security Specialist to drive the security and compliance foundations of the enterprise-wide GenAI Platform built on Google Gemini Enterprise. This role is critical for ensuring the safe, compliant, and secure operation of custom LLM agents across the GenAI ecosystem.
You will work closely with architecture, governance, agent engineering, and platform teams to design guardrails, implement GenAI controls, enable DLP, and integrate responsible AI practices across the platform.
Key Responsibilities
1. AI Security, Policy & Guardrail Implementation
Defend the platform against AI-specific threats (prompt injection, model theft).
Conduct continuous security assessments, vulnerability scanning, and red teaming exercises.
Design and implement custom AI security guardrails, including policy injection, prompt shielding, safety filters, and misuse prevention.
Ensure alignment with enterprise GenAI controls and enterprise-wide security policies.
Partner with governance teams to ensure platform-wide compliance and adherence to responsible AI best practices.
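To illustrate the kind of guardrail work this covers, below is a minimal, hypothetical sketch of input screening against prompt-injection heuristics. The deny-list patterns are illustrative assumptions only; production guardrails (policy injection, prompt shielding, safety filters) layer classifiers and policy engines rather than relying on regexes.

```python
import re

# Hypothetical deny-list of prompt-injection heuristics (illustrative only;
# not a real guardrail product's rule set).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

def screen_prompt(user_input: str) -> tuple[bool, str]:
    """Return (allowed, reason); block input matching an injection heuristic."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            return False, f"blocked: matched {pattern.pattern!r}"
    return True, "allowed"
```

In practice such a filter would sit in front of the agent's model call, with blocked requests logged for the red-teaming and monitoring loops described above.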
2. Data Security & Confidentiality
Implement data classification, DLP redaction, PII handling, and safe content generation controls across Gemini Enterprise.
Ensure secure handling of sensitive datasets used for retrieval, grounding, or contextual workflows.
Work with architects to ensure secure data flows across agents, data stores, and platform integrations.
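As a toy illustration of the DLP redaction responsibility, here is a sketch that masks common PII patterns before text reaches retrieval or grounding. The rules are assumptions for illustration; an enterprise deployment would use a managed DLP service with curated infoType detectors, not hand-rolled regexes.

```python
import re

# Hypothetical redaction rules (illustrative; real DLP uses managed detectors).
PII_RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with [TYPE] placeholders prior to grounding."""
    for label, pattern in PII_RULES.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A redaction step like this would typically run in the ingestion path for the sensitive datasets mentioned above, so downstream agents only ever see masked content.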
Required Skills & Qualifications
Technical Expertise
Strong understanding of LLM architectures, GenAI systems, and security considerations for AI/ML workloads.
Hands-on experience with Google Cloud Platform (GCP) security frameworks, Vertex AI, and preferably Gemini APIs.
Experience mitigating LLM risks: prompt injection, hallucinations, PII leakage, unsafe outputs, and model exploitation.
Familiarity with monitoring tools, logging frameworks, and security automation in cloud environments.
Security & Compliance Skills
Understanding of responsible AI principles, AI risk frameworks, and enterprise compliance standards.
Knowledge of policy management, audit processes, and model evaluation techniques.
Experience with implementing guardrails, content governance, and policy enforcement for AI systems.
Soft Skills
Strong analytical skills and ability to diagnose complex security challenges.
Comfortable collaborating with cross-functional teams in a POD structure.
Excellent communication and documentation capabilities.
Strong ownership and ability to drive security initiatives end-to-end.
Preferred Qualifications
5–10 years of experience in cybersecurity, cloud security, or AI governance roles.
Experience working on enterprise AI/ML or GenAI platforms.
Certifications: GCP Security Engineer, CISSP, CCSP, or equivalent (preferred but not mandatory).
Experience in security automation, red teaming for LLMs, or AI assurance tooling.
Job ID: 143830665