Reporting to the Head, Enterprise Data Governance, the AI Governance Manager will lead PACS's AI governance initiatives, ensuring the responsible, ethical, and compliant deployment of AI systems. The role serves as the local liaison to Prudential's Group AI Governance team and the AI Governance Working Group (AIWG), overseeing the lifecycle of PACS's AI systems from registration through risk assessment, approval, monitoring, and recertification.
Governance & Oversight
- Establish, maintain and enforce the PACS AI Governance Standard Operating Procedures to meet regulatory requirements and relevant Group Policies and Standards.
- Perform gap assessments against regulatory requirements and relevant Group Policies and Standards, and ensure identified gaps are addressed.
- Support the relevant working group / committee in providing governance oversight of PACS's AI activities.
- Oversee the maintenance of the PACS AI Systems Register, ensuring information and documentation are complete, accurate, and up to date.
Risk Management & Compliance
- Perform initial reviews and risk assessments of AI systems, including Generative AI models, to ensure compliance with MAS guidelines and Group AI governance policies, and support comprehensive risk evaluations as needed.
- Guide PACS stakeholders in preparing and submitting AI systems for AIWG review, impact assessment, and approval.
- Ensure AI systems are used strictly for approved purposes, in line with what is reported in the AI Systems Register.
- Coordinate, where needed, the evaluation of AI model performance against established metrics, ensuring systems remain fit for use.
- Promptly notify AIWG of any AI system‑related non‑conformities, incidents, or complaints.
- Monitor the implementation of controls and delivery of conditions prescribed in AIWG approvals prior to production deployment.
- Collaborate with the Business Information Security Officer (BISO) and Data Privacy team to ensure data, information security and privacy requirements are addressed throughout the AI system lifecycle.
Policy & Standards Development
- Establish and maintain local SOPs, guidelines, and processes that reflect both AIWG requirements and local regulatory standards.
- Contribute to the ongoing refinement of AI governance standards and participate in horizon scanning for emerging AI technologies.
- Collaborate with the Group AI, Cyber & AI team and other PACS stakeholders to ensure local governance remains aligned with evolving regulatory expectations.
Stakeholder Engagement
- Coordinate with PACS Technology Risk Management (TRM) Team, AI Engineering Team, AIWG, AI Lab, and Group Data CoE on AI‑related initiatives.
- Serve as a facilitator for PACS stakeholders in AIWG meetings, supporting their participation in decision‑making on AI system certifications and policy updates.
WHO WE ARE LOOKING FOR:
Competencies & Personal Traits
- Driven, self-starting individual
- Strong stakeholder management skills
- Strong communication skills, with the ability to craft a clear storyline and engage in focused discussions
- Not afraid to make decisions when empowered to do so
- Operates effectively even when circumstances are not completely certain
Working Experience
- 5+ years of experience in data governance & privacy, AI/ML systems, or risk management.
Professional Qualifications and Technical Knowledge
- Bachelor's or Master's degree in Data Science, Computer Science, Information Governance, or a related field.
- Certified Artificial Intelligence Governance Professional (AIGP) and/or Lead AI Risk Manager
- Strong understanding of AI ethics, regulatory compliance, and data privacy.
- Familiarity with AI governance tools (e.g., OneTrust) and frameworks, including the NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001:2023 (AI Management System Standard), the OECD AI Principles, the EU Artificial Intelligence Act, and IMDA's Model AI Governance Framework.