Cybersecurity Engineer

AI Security Institute

Greater London

Hybrid

GBP 65,000 - 145,000

Full time


Job summary

A leading cybersecurity research organization in London seeks a cybersecurity engineer to design evaluation benchmarks for AI systems. This role involves building environments to measure AI performance on cybersecurity tasks, ensuring robust infrastructures, and documenting findings. The ideal candidate has strong Python skills and experience in red‑teaming. This is a full-time position with hybrid work flexibility and a salary range of £65,000 to £145,000, plus benefits.

Benefits

Generous annual leave
Employer pension contributions
Flexible working options

Qualifications

  • Experience in penetration testing or CTF design.
  • Familiarity with virtualization technologies.
  • Active involvement in the cybersecurity community.

Responsibilities

  • Design benchmarks for AI systems' cyber capabilities.
  • Build and maintain evaluation infrastructures.
  • Write reports and share findings.

Skills

Strong Python skills
Experience in cybersecurity red-teaming
Strong interest in AI safety

Tools

Network testing tools
Penetration testing frameworks
Reverse engineering tools

Job description

The AI Security Institute is the world’s largest and best-funded team dedicated to understanding the capabilities and impacts of advanced AI and translating that knowledge into action. We operate at the heart of the UK government with direct links to the Prime Minister’s office and collaborate with frontier developers and governments globally.

About the Team

The Cyber and Autonomous Systems Team (CAST) researches and maps the evolving frontier of AI capabilities, focusing on preventing harms from high‑impact cybersecurity capabilities and highly capable autonomous AI systems. Our team blends high‑velocity generalists with technical experts from Meta, Amazon, Palantir, DSTL, and Jane Street.

About the Role

We are looking for a cybersecurity engineer to design environments and challenges that benchmark the cyber capabilities of AI systems. You will build cyber ranges, CTF‑style tasks, and evaluation infrastructure to measure how well frontier AI models perform on real‑world cybersecurity tasks.

Core Responsibilities
  • Evaluation Design & Development (60%)
    • Design cyber ranges and CTF‑style challenges for automatically grading AI system performance on cybersecurity tasks.
    • Build agentic scaffolding to evaluate frontier models, equipping them with tools such as network packet capture utilities, penetration testing frameworks, and reverse engineering/disassembly tools.
    • Design metrics and interpret results of cyber capability evaluations.
  • Infrastructure Engineering (30%)
    • Work alongside other engineers to ensure evaluation environments are robust and scalable.
  • Research & Communication (10%)
    • Write reports, research papers, and blog posts to share findings with stakeholders.
    • Keep up to date with related research taking place in other organisations.
    • Contribute to AISI’s broader understanding of AI cyber risks.

Example Projects
  • Onboard and integrate new cyber ranges into our evaluation pipeline.
  • Conduct agent research to improve the cyber capabilities of our agents.
  • Improve grading and scoring methodologies for automated evaluation tasks.
  • Integrate defensive telemetry and simulated users into ranges to increase realism.
  • Collaborate with government partners on joint research publications.

Impact

Your work will directly shape the UK government’s understanding of AI cyber capabilities, inform safety standards for frontier AI systems, and contribute to the global effort to develop rigorous evaluation methodologies. The evaluations you build will help determine how advanced AI systems are assessed before deployment.

What we are looking for
Essential
  • Strong Python skills with experience writing scripts for automation or security tooling.
  • Proven experience in at least one area of cybersecurity red‑teaming:
    • Penetration testing.
    • Cyber range design.
    • Competing in or designing CTFs.
    • Developing automated security testing tools.
    • Bug bounty vulnerability research or exploit discovery and patching.
  • Strong interest in helping improve the safety of AI systems.

Preferred
  • Familiarity with virtualisation technologies such as Proxmox VE and infrastructure‑as‑code approaches to enable reproducible test environments.
  • Ability to communicate the outcomes of cybersecurity research to a range of technical and non‑technical audiences.
  • Familiarity with cybersecurity tools such as network packet capture utilities, penetration testing frameworks, and reverse engineering/disassembly tools.
  • Active in the cybersecurity community with a track record of keeping up to date with new research.
  • Previous experience building or measuring the impact of automation tools on cyber red‑team workflows.

Example backgrounds
  • Penetration tester with 1 year's experience; has designed CTF challenges or cyber ranges; strong Python skills; interested in AI safety.
  • Content engineer at a cybersecurity training platform; experienced in building vulnerable machines, CTF challenges, and automated deployment infrastructure.
  • Security researcher with experience in vulnerability research or bug bounties; familiar with penetration testing frameworks and reverse engineering tools; has communicated findings to mixed audiences.

Core requirements
  • This is a full‑time role.
  • You should be able to join us for at least 24 months.
  • You should be able to work from our office in London (Whitehall) for several days each week, but we provide flexibility for remote work.
  • We would like candidates to be able to start in Q2 2026.

What We Offer
  • Incredibly talented, mission‑driven and supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Minister’s AI Advisor and leading AI companies.
  • Opportunity to shape the first and best‑resourced public‑interest research team focused on AI security.

Resources & access
  • Pre‑release access to multiple frontier models and ample compute.
  • Extensive operational support so you can focus on research and ship quickly.
  • Work with experts across national security policy, AI research, and adjacent sciences.

Life & family
  • Modern central London office (cafes, food court, gym) or option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
  • Hybrid working flexibility for occasional remote work abroad and stipends for work‑from‑home equipment.
  • At least 25 days annual leave, 8 public holidays, extra team‑wide breaks and 3 days off for volunteering.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents, 3 extra paid weeks, and the option of additional unpaid time).
  • On top of your salary, we contribute 28.97% of your base salary to your pension.
  • Discounts and benefits for cycling to work, donations, and retail/gyms.

Salary

Annual salary is benchmarked to role scope and relevant experience, ranging between £65,000 and £145,000. The offer comprises a base salary plus a technical allowance. An additional 28.97% employer pension contribution is paid on the base salary.

Selection Process
  • Initial interview
  • Technical take‑home test
  • Second interview and review of take‑home test
  • Third interview
  • Final interview with members of the senior team

Security Clearance

Successful candidates must undergo a criminal record check and obtain baseline personnel security standard (BPSS) clearance before appointment. Preference may be given to candidates eligible for counter-terrorist check (CTC) clearance; where a higher clearance level is required, this will be indicated in the advertisement.

Use of AI in Applications

Artificial intelligence can be a useful tool to support your application, but all examples and statements you provide must be truthful, factually accurate, and drawn directly from your own experience. Where plagiarism is identified (presenting the ideas and experiences of others, or content generated by artificial intelligence, as your own), applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.

Internal Fraud Database

The Internal Fraud function of the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office bans these individuals from further civil service employment for 5 years. It then discloses a limited dataset back to DLUHC, which carries out pre-employment checks to detect known fraudsters attempting to reapply for roles.

Nationality Requirements

We may be able to offer roles to applicants of any nationality or background. We encourage you to apply even if you do not meet the standard nationality requirements.

Working for the Civil Service

In accordance with the Civil Service Code, we set out the standards of behaviour expected of civil servants and recruit by merit on the basis of fair and open competition. The Civil Service embraces diversity and promotes equal opportunities. We run a Disability Confident Scheme for candidates with disabilities who meet the minimum selection criteria. We also offer a Redeployment Interview Scheme to civil servants who are at risk of redundancy and who meet the minimum requirements for the advertised vacancy.

Diversity and Inclusion

The Civil Service is committed to attracting, retaining and investing in talent wherever it is found. To learn more, please see the Civil Service People Plan and the Civil Service Diversity and Inclusion Strategy.
