Job Search and Career Advice Platform

Enable job alerts via email!

AI Governance and Testing Manager

PricewaterhouseCoopers

Greater London

Hybrid

GBP 60,000 - 80,000

Full time

Yesterday
Be an early applicant

Generate a tailored resume in minutes

Land an interview and earn more. Learn more

Job summary

A leading professional services firm is seeking an AI Governance and Testing Manager in Greater London. The candidate will design governance frameworks, assess AI risks, and lead automated testing programs. Responsibilities include ensuring AI solutions meet ethical and regulatory standards and coaching junior team members. Applicants should have strong Python skills, experience in AI governance processes, and the ability to communicate technical findings clearly. This role offers flexible work arrangements along with competitive benefits.

Benefits

Empowered flexibility
Private medical cover
24/7 access to a qualified virtual GP
Six volunteering days a year

Qualifications

  • Experience with EU AI Act, NIST AI RMF, or ISO 42001.
  • Experience with GDPR-compliant data usage and retention.
  • Experience in scenario-based evaluations or batch testing.

Responsibilities

  • Design AI governance frameworks for responsible AI adoption.
  • Lead AI use case risk assessments and identify control gaps.
  • Develop AI evaluation methodologies for performance testing.
  • Coach junior colleagues on AI governance and testing.
  • Provide evidence-based recommendations on model risks.

Skills

Strong Python skills
Strong written and verbal communication
Experience designing AI governance frameworks
Experience implementing governance processes
Ability to interpret model behaviour
Experience leading workstreams
Experience designing and executing testing of Gen AI or ML systems
Job description
Line of Service

Assurance

Industry/Sector

Not Applicable

Specialism

Risk

Management Level

Manager

About the role

Our AI & Modelling (AI&M) team is a diverse group of 500+ individuals across the UK and India, with deep experience across the banking, insurance, commercial and government sectors. We are expanding rapidly as clients look to safely adopt AI whilst realising its benefits.

This role sits at the forefront of how organisations build, deploy and oversee advanced AI systems. As an AI Governance and Testing Manager, you will join a team that helps organisations adopt AI responsibly and with confidence. You will work closely with clients to shape practical approaches for evaluating and governing AI models, including generative AI systems and emerging agentic frameworks.

You will contribute to the development of governance frameworks and processes that ensure AI solutions meet ethical, regulatory and operational expectations - including work in model risk, data privacy, data governance, technology and cyber security. This work spans a wide range of AI programmes across industries, providing opportunities to influence decision‑making, guide major AI adoption initiatives and help shape emerging industry best practice.

Alongside this, you will play a key role in developing AI evaluation and monitoring methodologies, strengthening how organisations understand and manage the risks and opportunities created by AI. A central aspect of the role is leading automated testing programmes that assess how AI systems perform in practice and at scale. This includes designing and leading automated testing approaches such as scenario‑based testing, red‑teamin and ongoing monitoring of non‑deterministic AI systems in production.

You will oversee end‑to‑end delivery of AI governance and testing workstreams, acting as a key client contact and ensuring high‑quality, defensible outcomes. You will coach and develop junior colleagues, review and challenge their work, and help build our firm‑wide capability in AI governance and testing.

What your days will look like
  • Design and implement AI governance frameworks, policies, processes and operating models that enable safe, responsible and compliant AI adoption across the development lifecycle.
  • Lead AI use case risk assessments, identifying vulnerabilities and control gaps, and designing proportionate mitigations such as guardrails, governance checkpoints and automated controls to support safe and scalable deployment.
  • Lead the design and delivery of AI evaluation programmes, including automated and scenario‑based testing to assess performance, robustness, safety, reliability and failure modes.
  • Develop and refine AI evaluation methodologies and reusable testing assets, contributing to research that advances the team's approach to generative and agentic AI assessment.
  • Interpret evaluation outputs and provide clear, evidence‑based recommendations on model risks, limitations, controls and deployment readiness.
  • Manage day‑to‑day engagement with client teams and internal stakeholders across PwC, guiding junior colleagues and ensuring high‑quality delivery across governance, testing and advisory projects.
  • Contribute to new propositions, tools and thought leadership that enhance the team's capabilities in AI evaluation, governance and risk management.
  • Maintain an up‑to‑date view of regulatory, technological and industry developments, sharing insights with the team and helping to shape leading‑edge best practice in AI governance and testing.
The role is for you if
  • Experience designing AI governance frameworks aligned to recognised standards or regulations (e.g. EU AI Act, NIST AI RMF, ISO 42001, SS1/23, Responsible AI principles).
  • Experience implementing governance processes across the AI lifecycle, such as use‑case risk assessments, AI model documentation requirements, monitoring standards, or the design of proportionate controls.
  • Experience designing and implementing data privacy and data governance approaches for AI systems, including GDPR‑compliant data usage, retention, lineage and access controls across the AI lifecycle.
  • Experience designing and executing structured testing of Gen AI or ML systems, such as scenario‑based evaluations or batch testing.
  • Ability to interpret model behaviour and testing outputs, translating technical findings into clear governance, risk and compliance implications for senior stakeholders.
  • Strong Python skills, including experience developing ML, GenAI and/or agentic model pipelines.
  • Strong written and verbal communication skills, with experience producing structured evaluation reports, governance documents, technical assurance outputs and client‑ready presentations.
  • Experience leading workstreams or managing junior colleagues, coordinating delivery across multiple projects, and contributing to the development of new methodologies, tools, and client propositions.
What you'll receive from us

We offer a range of benefits including empowered flexibility and a working week split between office, home and client site; private medical cover and 24/7 access to a qualified virtual GP; six volunteering days a year and much more.

Travel Requirements

Up to 20%

Available for Work Visa Sponsorship?

Yes

Government Clearance Required?

No

Optional Skills
  • Accepting Feedback
  • Active Listening
  • Actuarial Science
  • Analytical Thinking
  • Coaching and Feedback
  • Communication
  • Complex Data Analysis
  • Creativity
  • Embracing Change
  • Emotional Regulation
  • Empathy
  • Financial Data Mining
  • Financial Modeling
  • Financial Risk Analysis
  • Financial Risk Management
  • Inclusion
  • Intellectual Curiosity
  • Learning Agility
  • Optimism
  • Presenting Financial Reports
  • Professional Courage
  • Relationship Building
  • Risk Analysis
  • Risk Model Implementation
Get your free, confidential resume review.
or drag and drop a PDF, DOC, DOCX, ODT, or PAGES file up to 5MB.