Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 2 years of experience with security assessments, security design reviews, or threat modeling.
- 2 years of experience with security engineering, computer and network security, and security protocols.
- 2 years of coding experience in one or more general purpose languages.
Preferred qualifications:
- Experience in detection, investigations, and incident response.
- Experience writing production code in Python or Go.
- Experience in analyzing systems and identifying security and abuse problems, threat modeling, and remediation.
- Knowledge of security principles.
About The Job
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
Cloud AI Protection's (CAIP) mission is to enable the rapid growth of Google Cloud Platform (GCP) and Workspace AI businesses by curbing associated safety and security risks. CAIP supports GCP and Workspace AI products throughout their life cycle by advancing safety protection mechanisms from the earliest stages of product design. Specifically, CAIP's service portfolio includes both pre- and post-launch capabilities.
As an AI Security Engineer, you will ensure that our AI products are not only powerful but also safe, secure, and aligned with our AI principles. You will help ensure every AI product is as resilient as it can be by designing and building an industry-leading AI agent system to protect Google AI from misuse. Your deep technical skills, understanding of potential security and safety risks, and passion for diving into abuser Tactics, Techniques, and Procedures (TTPs) will help teams solve challenging classes of AI safety and misuse problems at Google scale.
Responsibilities
- Design and build anti-abuse detection and action systems, including detection and enforcement AI agents, to protect Google Cloud and Workspace AI products at scale. Investigate leads and incidents to calibrate AI agents and improve their performance.
- Drive enterprise-focused security improvements to Google products and services.
- Respond to AI abuse and misuse incidents; rapidly investigate, communicate, and take action.
- Review and develop secure operational practices, and provide security guidance for Engineers and Analysts.
- Communicate with Product and Customer teams on incidents and threat assessment outcomes to identify solutions to mitigate classes of attacks.
- Collaborate with other Google teams to ensure that issues are understood and solutions are adopted.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.