Principal Research Scientist, AI Safety

Faculty AI

Greater London

Hybrid

GBP 80,000 - 120,000

Full time

Job summary

A leading AI organization in Greater London seeks a Principal Research Scientist for AI Safety to lead innovative research in safe AI systems. You will drive the research agenda focusing on large language models, mentor a team, and publish high-impact findings in academic journals. Ideal candidates have a strong track record in AI research, excellent communication skills, and deep knowledge of AI safety. This role offers competitive benefits and a hybrid working model.

Job description

We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then we've worked with over 350 global customers to transform their performance through human‑centric AI. You can read about our real‑world impact here.

We don’t chase hype cycles. We innovate, build and deploy responsible AI that moves the needle—and we know a thing or two about doing it well. Our depth of technical product and delivery expertise serves clients across government, finance, retail, energy, life sciences and defence.

Our business and reputation are growing fast, and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch‑defining technology—join a company where you’ll be empowered to envision its most powerful applications and to make them happen.

About the Team

Faculty conducts critical red‑teaming and builds evaluations for misuse capabilities in sensitive areas such as CBRN, cybersecurity and international security for several leading frontier model developers and national safety institutes. Our work has been featured in OpenAI’s o1 system card. We also conduct fundamental technical research on mitigation strategies, publishing findings in peer‑reviewed conferences and delivering them to national security institutes. Complementing this, we design evaluations for model developers across broader safety‑relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.

About the Role

The Principal Research Scientist for AI Safety will be the driving force behind Faculty’s small, high‑agency research team shaping the future of safe AI systems. You will lead the scientific research agenda for AI safety, focusing on large language models and other critical systems. The role involves leading researchers, driving external publications and ensuring alignment with Faculty’s commercial ambition to build trustworthy AI, giving you the opportunity to make a high‑impact contribution in a rapidly evolving, critical field.

What you’ll be doing
  • Lead the AI safety team’s ambitious research agenda, setting priorities aligned with long‑term company goals.
  • Conduct and oversee cutting‑edge AI safety research specifically for large language models and safety‑critical AI systems.
  • Publish high‑impact research findings in leading academic conferences and journals.
  • Shape the research agenda by identifying impactful opportunities and balancing scientific and practical priorities.
  • Help build and mentor a growing team of researchers, fostering an innovative and collaborative culture.
  • Collaborate on delivery of evaluations and red‑teaming projects in high‑risk domains like CBRN and cybersecurity.
  • Position Faculty as a thought leader in AI safety through research and strategic stakeholder engagement.

Who we’re looking for
  • You have a proven track record of high‑impact AI research demonstrated through top‑tier academic publications or equivalent experience.
  • You possess deep domain knowledge in language models and the evolving field of AI safety.
  • You exhibit strong research judgment and extensive experience in AI safety, including generating and executing novel research directions.
  • You have the ability to conduct and oversee complex technical research projects, with advanced programming skills (Python and the standard data‑science stack) sufficient to review the team’s work.
  • You bring excellent verbal and written communication skills capable of sharing complex ideas with diverse audiences.
  • You have a deep understanding of the AI safety research landscape and the ability to build connections to secure resources for impactful work.

Our Interview Process
  • Talent Team Screen (30 mins)
  • Experience & Theory interview (45 mins)
  • Research presentation and coding interview (75 mins)
  • Leadership and Principles interview (60 mins)
  • Final stage with our CEO (45 mins)

Our Recruitment Ethos

We aim to grow the best team—not the most similar one. We know that diversity of individuals fosters diversity of thought and strengthens our principle of seeking truth. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.


Some of our standout benefits
  • Unlimited Annual Leave Policy
  • Private healthcare and dental
  • Enhanced parental leave
  • Family‑Friendly Flexibility & Flexible Working
  • Sanctus Coaching
  • Hybrid Working (2 days in our Old Street office, London)

Required Experience

Staff IC

Key Skills
  • Machine Learning
  • Python
  • Data Science
  • AI
  • R
  • Research Experience
  • Research & Development
  • Natural Language Processing
  • Data Analysis Skills

Employment Type: Full‑Time
Vacancy: 1
