The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, to carry out cyber-attacks, to enable crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we’re building a unique and innovative organisation to prevent AI’s harms from impeding its potential.
Our team focuses on AI Control: ensuring that even if frontier systems are misaligned, they can be safely used for high-stakes tasks. To achieve this, we are advancing conceptual research into control protocols and the corresponding safety cases. We will also conduct realistic empirical research on mock frontier AI development infrastructure to identify flaws in theoretical approaches and refine them accordingly.
You will be part of a team of 11 researchers, including people with experience in the control agenda and at frontier labs. Your work will involve a mix of conceptual and empirical research, with the core goal of making substantial improvements to the robustness of control protocols across major labs, particularly as progress continues towards AGI.
Research partnerships with frontier AI labs will also be a significant part of your role. This will include collaborating on promising research directions (e.g., more realistic empirical experiments in settings that closely mimic lab infrastructure), as well as supporting the development of control-based safety cases.
You will report to Alan Cooney - our team lead. You will also receive research mentorship from our research directors, including Geoffrey Irving and Yarin Gal. From a compute perspective, you will have excellent access to resources from both our research platform team and the UK's Isambard supercomputer (5,000 H100s).
You may be a good fit if you have some of the following skills, experience and attitudes. Please note that you don’t need to meet all of these criteria, and if you're unsure, we encourage you to apply.
We are hiring across levels L5-L7. The full salary ranges are available below.