
Head of Research for AI Safety

Faculty

London

Hybrid

GBP 80,000 - 100,000

Full time

Job summary

Faculty is seeking a Head of Research for AI Safety to shape its scientific agenda around large language models. You'll lead a small, high-agency research team, driving cutting-edge AI safety research and collaborating on evaluations in high-risk domains. The role requires a proven AI research track record, strong Python programming skills, and a collaborative leadership style. The position offers hybrid working, unlimited annual leave, and a commitment to responsible AI development.

Benefits

Unlimited annual leave
Private healthcare
Family-friendly flexibility
Competitive perks

Qualifications

  • Proven track record of high-impact AI research with top-tier academic publications.
  • Deep domain knowledge in language models and AI safety.

Responsibilities

  • Lead and mentor a small, high-agency research team focused on AI safety.
  • Drive cutting-edge research on large language models and publish findings.
  • Set research priorities aligned with company goals.

Skills

AI research
Python programming
Team leadership
Research judgment
Collaboration in high-risk domains

Job description

Overview

Faculty is seeking a Head of Research for AI Safety to lead its scientific research agenda on large language models and safety-critical AI systems, shaping the future of trustworthy AI in a rapidly evolving field.

Responsibilities
  • Lead and mentor a small, high-agency research team focused on AI safety.
  • Drive cutting-edge research on large language models and other critical systems, publishing high-impact findings in leading conferences and journals.
  • Set research priorities aligned with long-term company goals, identifying impactful opportunities and balancing scientific and practical priorities.
  • Collaborate on delivery of evaluations and red-teaming projects in high-risk domains such as CBRN and cybersecurity.
  • Position Faculty as a thought leader in AI safety through research and strategic stakeholder engagement.

Qualifications
  • Proven track record of high-impact AI research with top-tier academic publications or equivalent experience.
  • Deep domain knowledge in language models and the evolving field of AI safety.
  • Strong research judgment and extensive experience in AI safety, including generating and executing novel research directions.
  • Advanced programming skills (Python and the standard data science stack) to oversee and review the team's work.
  • A passionate leadership style that supports the personal and professional development of technical teams.
  • Deep understanding of the AI safety research landscape and ability to build connections to secure resources for impactful work.

Benefits & Culture
  • Hybrid working: 2 days in our Old Street office, London, with flexible remote options.
  • Unlimited annual leave policy and family‑friendly flexibility.
  • Private healthcare, dental, enhanced parental leave and other competitive perks.
  • Opportunity to work in a forward‑thinking team committed to responsible AI and positive legacy through technology.

Apply

If you are excited by this role and believe you bring key strengths, please apply or reach out to our Talent Acquisition team for a confidential chat. Faculty is open to part‑time roles or condensed hours.
