
Job Description

In This Role, You Will:

  • Conduct comprehensive technical risk assessments for internal systems, projects, process improvements, AI initiatives, and vendor/product integrations, identifying risks, establishing mitigation plans, and collaborating with cross-functional teams to support effective risk treatment and mitigation.
  • Actively participate in the third-party risk management program by conducting vendor security assessments, focusing on evaluating technical security controls, integration risks, and compliance requirements, including evaluating AI features and risks.
  • Help enhance the third-party risk assessment program by maturing the assessment approach, monitoring processes, and re-evaluation criteria, and by adopting a customized, AI-driven vendor security scorecard.
  • Identify, document, and monitor risks, recommend technical treatment plans, and manage follow-through to closure and reporting.
  • Support certification audits for ISO 27001, ISO 27701, SOC 2, PCI DSS, TX-RAMP, HIPAA, and ITGC SOX, assisting with evidence collection, remediation tracking, and automated data aggregation workflows.
  • Conduct access control reviews to validate user permissions and enforce least privilege principles.
  • Leverage security automation tools to monitor compliance metrics, detect anomalies, and generate reports for stakeholders.
  • Contribute to the development, refinement, and implementation of security policies, standards, and procedures, emphasizing automation-driven workflows, actionable reporting, and incorporation of AI governance guidelines.
  • Support the organization's AI initiatives by engaging in AI solution development and adoption.
  • Provide daily operational support for compliance initiatives, ensuring timely execution of projects and alignment with organizational security objectives.

Qualifications

Here's What You Need:

  • Bachelor's degree in Computer Science, Information Technology, Cybersecurity, or a related field (Master's preferred).
  • 1-2 years of experience in development, risk engineering, and AI security.
  • 1-2 years of demonstrable experience in security risk management, auditing, and compliance, with a focus on supporting security risk assessments and audit and compliance activities.
  • Good interpersonal communication skills with experience and confidence in collaborating with internal and external partners and stakeholders to develop productive relationships and achieve positive security risk management outcomes.
  • Ability to learn quickly, with a willingness to take ownership of new projects and to learn new technologies and methodologies.
  • Understanding of risk assessment methodologies and best practices.
  • Ability and willingness to produce and maintain documentation and reports, specifically developing policies, standards, risk assessment reports, and other forms of Security Risk Management Program documentation.
  • Proficiency with productivity and collaboration tools, such as Microsoft Office, Slack, Box, and Zoom.
  • Excellent presentation and written communication skills and a team-focused attitude.

Must have

  • Understanding of AI/ML concepts, including model development, training, and deployment.
  • Familiarity with Generative AI (GenAI) risks, such as prompt injection, data leakage, model bias, and adversarial attacks.
  • Experience with AI guardrails, including input/output sanitization, audit trail logging, and model vulnerability scanning.
  • Knowledge of cloud security frameworks (e.g., AWS, Azure, GCP) for securing AI/ML deployments.
  • Experience integrating AI-powered tools into existing security and compliance workflows.
  • Ability to design scalable, automation-driven processes to reduce manual overhead.

Job ID: 143959573
