An innovative firm is seeking a Cyber Security Researcher to join their pioneering Cyber Evaluations Team. In this role, you will help design research strategies and evaluate AI systems' capabilities in cyber security. Your responsibilities include creating challenges, advising on results interpretation, and collaborating with experts. This position offers a unique opportunity to contribute to groundbreaking research that enhances the safety of AI technologies. If you are passionate about cyber security and eager to make a significant impact in this field, this role is perfect for you.
About the Team
As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities are also a common bottleneck in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can better understand how capable they currently are at performing cyber security tasks.
The AI Security Institute’s Cyber Evaluations Team is developing first-of-its-kind government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security. Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems. Our current focus is on building difficult cyber security tasks against which we can measure the performance of AI agents.
We are building a cross-functional team of cyber security researchers, machine learning researchers, research engineers and infrastructure engineers to help us create new kinds of capability and safety evaluations. As such, to scale up, we require candidates who can evaluate frontier AI systems as they are released.
JOB SUMMARY
As a Cyber Security Researcher at AISI, your role will range from helping design our overall research strategy and threat model to working with research and infrastructure engineers to build environments and challenges against which to benchmark the capabilities of AI systems. You may also be involved in coordinating teams of internal and external cyber security experts for open-ended probing exercises that explore the capabilities of AI systems, or in exploring the interactions between narrow cyber automation tools and general-purpose AI systems.
Your day-to-day responsibilities could include:
PERSON SPECIFICATION
You will need experience in at least one of the following areas:
This role might be a great fit if:
Core requirements
Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.