AI Security Architect

  • Posted 12 hours ago

Job Description

Overview

The AI Security Architect reports directly to the Information Security organization and is responsible for enabling the safe, responsible, and scalable adoption of AI across the enterprise. This role partners closely with all Infosec teams to design and implement effective security protections for AI-enabled systems operating across RealPage's applications, enterprise environments, and SaaS production network.

In addition to shaping AI security architecture and controls, this position plays a key role in accelerating the Infosec organization's own use of AI. The AI Security Architect helps security teams responsibly integrate AI into their daily workflows to improve the speed, accuracy, and effectiveness of detection, investigation, mitigation, and incident response.

This role combines security architecture, applied AI expertise, and hands-on collaboration to ensure AI technologies are adopted safely while empowering Infosec teams to move faster and make better decisions using modern AI platforms.

Responsibilities

  • Design and influence enterprise security architectures for AI systems, including security incident response, LLMs, agentic workflows, MCP servers, AI gateways, and supporting infrastructure across cloud and SaaS environments
  • Partner across all Infosec teams (Security Engineering, AppSec, Red Team, CSIRT, IAM/IGA, AI Governance, Third-Party Risk) to embed AI-specific security controls, patterns, and guardrails into existing programs
  • Lead AI-focused threat modeling covering risks such as prompt injection, insecure tool use, agent autonomy abuse, data leakage, model inversion, inference attacks, supply-chain risk, and non-human identity misuse
  • Define and advise on AI identity and access strategies, including machine identity, non-human identity (NHI), agent identity, credential lifecycle management, and least-privilege access for AI systems
  • Guide secure implementation of AI guardrails, content controls, policy enforcement, and runtime protections across internally built and third-party AI platforms
  • Evaluate and enhance API and MCP security for AI services, including authentication, authorization, abuse prevention, observability, and integration with existing security tooling
  • Perform and support AI red-teaming activities, including agent abuse testing, prompt and tool-chain manipulation, model and integration testing, and adversarial simulation using automated and manual techniques
  • Accelerate Infosec adoption of AI tools, helping teams safely integrate AI into workflows for vulnerability management, detection engineering, incident response, threat analysis, and security operations
  • Educate, mentor, and advise Infosec stakeholders on practical, secure uses of AI platforms and agents to improve speed, quality, and scale of security outcomes
  • Contribute to AI security standards, documentation, metrics, and executive-level reporting to advance the maturity of AI governance and security programs

Qualifications

  • 7+ years of experience as a technologist, including strong hands-on engineering experience
  • 5+ years of information security experience, spanning architecture, application security, security engineering, red teaming, or incident response
  • Direct experience securing AI/LLM systems, including threat modeling, control design, or hands-on implementation
  • Working knowledge of agentic AI development concepts, including tools, orchestration, tool calling, and autonomous workflows
  • Experience with machine and non-human identity, including service identities, workload identities, secrets management, and access governance
  • Understanding of MCP and AI integration patterns, including secure deployment, authentication, authorization, and monitoring considerations
  • Strong foundation in API security, including RESTful services, authentication protocols, abuse prevention, and observability
  • Familiarity with AI guardrails, safety controls, and policy enforcement mechanisms
  • Hands-on exposure to AI red-teaming tools or techniques (e.g., PyRIT, Promptfoo, Protect AI, or custom approaches)
  • Knowledge of cloud and hybrid security architectures (AWS, Azure, SaaS platforms)
  • Solid understanding of authentication and authorization protocols (OAuth2, OIDC, SAML, workload identity, token-based auth)
  • Excellent written and verbal communication skills, with the ability to influence technical and non-technical stakeholders
  • Demonstrated ability to collaborate across teams and educate others

Preferred

  • Hands-on development experience building or securing AI applications using frameworks and platforms such as LangChain, LLM gateways, agents, or workflow automation tools
  • Experience using or integrating AI coding assistants (e.g., Cursor, Copilot, Claude, Codex)
  • Familiarity with CI/CD and automation platforms and integrating security controls into delivery pipelines
  • Contributions to open-source projects , personal GitHub repositories, or research related to security or AI
  • Relevant certifications such as OSCP, OSWE, GPEN, GWAPT, or similar (certifications valued but not required)

Skills

KNOWLEDGE / SKILLS / ABILITIES

  • Strong architectural thinking with the ability to balance security, usability, and speed
  • Comfortable operating in ambiguous, fast-moving AI environments
  • Ability to translate emerging AI risks into practical, actionable guidance
  • High degree of integrity, sound judgment, and professionalism when handling sensitive matters
  • Passion for learning, experimentation, and helping teams safely move faster

Job ID: 144967877