A leading AI organization in Greater London seeks a Principal Research Scientist for AI Safety to lead innovative research in safe AI systems. You will drive the research agenda focusing on large language models, mentor a team, and publish high-impact findings in academic journals. Ideal candidates have a strong track record in AI research, excellent communication skills, and deep knowledge of AI safety. This role offers competitive benefits and a hybrid working model.
We established Faculty in 2014 because we thought that AI would be the most important technology of our time. Since then we've worked with over 350 global customers to transform their performance through human‑centric AI. You can read about our real‑world impact here.
We don’t chase hype cycles. We innovate, build and deploy responsible AI that moves the needle—and we know a thing or two about doing it well. Our depth of technical product and delivery expertise serves clients across government, finance, retail, energy, life sciences and defence.
Our business and reputation are growing fast, and we're always on the lookout for individuals who share our intellectual curiosity and desire to build a positive legacy through technology. AI is an epoch‑defining technology—join a company where you’ll be empowered to envision its most powerful applications and to make them happen.
Faculty conducts critical red‑teaming and builds evaluations for misuse capabilities in sensitive areas such as CBRN, cybersecurity and international security for several leading frontier model developers and national safety institutes. Our work has been featured in OpenAI’s system card for o1. We also conduct fundamental technical research on mitigation strategies, publishing findings in peer‑reviewed conferences and delivering them to national security institutes. Complementing this, we design evaluations for model developers across broader safety‑relevant fields, including the societal impacts of increasingly capable frontier models, showcasing our expertise across the safety landscape.
The Principal Research Scientist for AI Safety will be the driving force behind Faculty’s small, high‑agency research team shaping the future of safe AI systems. You will lead the scientific research agenda for AI safety, focusing on large language models and other critical systems. The role involves leading researchers, driving external publications and ensuring alignment with Faculty’s commercial ambition to build trustworthy AI, giving you the opportunity to make a high‑impact contribution in a rapidly evolving, critical field.
We aim to grow the best team—not the most similar one. We know that diversity of individuals fosters diversity of thought and strengthens our principle of seeking truth. We strongly encourage applications from people of all backgrounds, ethnicities, genders, religions and sexual orientations.
Seniority: Staff IC
Employment Type: Full‑Time
Experience: years
Vacancy: 1