Manage risks pertaining to AI frameworks and models in line with Basel and other regulatory guidelines.
Design, implement, and continuously enhance policies, standards, and control frameworks to promote the responsible and ethical use of AI across the organization.
Conduct comprehensive AI risk assessments for both internally developed solutions and third-party AI vendors, ensuring alignment with governance requirements and escalating instances of non-compliance as appropriate.
Maintain an enterprise-wide inventory of AI systems, validate associated data sources, and oversee the annual attestation process to ensure completeness and accuracy.
Partner with cross-functional teams and governance bodies to evaluate AI deployments, align with the organization's risk appetite, and support informed decision-making.
Prepare and present governance reports to senior leadership and risk committees, providing insights into compliance status, key risk indicators, and emerging AI-related risks.
Develop and deliver AI literacy initiatives, including role-specific training programs tailored to both business and technical audiences, to foster a culture of responsible AI use.