Enable job alerts via email!

Penetration Tester

Barclay Simpson

United Kingdom

Remote

GBP 60,000 - 80,000

Full time

13 days ago

Job summary

A leading cybersecurity firm in the United Kingdom is seeking an experienced Penetration Tester to conduct security assessments on generative AI systems and traditional infrastructures. The role involves performing red teaming, threat modeling, and evaluating vulnerabilities in AI-driven applications. Candidates should have hands-on experience in penetration testing and be able to collaborate with product and AI teams to improve system security. This position offers a dynamic work environment and opportunities for professional growth.

Qualifications

  • Hands-on experience in penetration testing.
  • Familiarity with AI-driven applications and systems.
  • Ability to create detailed reports for stakeholders.

Responsibilities

  • Conduct penetration tests on various systems including AI-integrated ones.
  • Perform red teaming and threat modeling exercises targeting AI models.
  • Evaluate AI systems for vulnerabilities and data leakage.

Skills

Penetration testing
Red teaming
Threat modeling
AI systems security
Vulnerability assessment

Tools

LLM Guardrails
TextAttack
IBM's ART
Job description

Penetration Tester needed with hands-on experience in testing Generative AI systems, LLMs, or AI-driven bots. In this role, you will lead and support security assessments targeting traditional infrastructure and AI-powered systems, including prompt injection testing, model exploitation, adversarial ML, and AI supply chain vulnerabilities. You will collaborate with product, data science, and AI teams to identify and mitigate security weaknesses in novel AI-driven applications.

Key Responsibilities

  • Conduct penetration tests on web applications, APIs, networks, and infrastructure, including AI-integrated systems.
  • Perform red teaming and threat modelling exercises specifically targeting AI models (eg, LLMs, chatbot interfaces, vector databases, and orchestration frameworks like LangChain or AutoGen).
  • Evaluate AI systems for prompt injection vulnerabilities, data leakage, model abuse, prompt chaining issues, and adversarial inputs.
  • Work with development and AI teams to build secure-by-design systems, offering actionable remediation guidance.
  • Conduct testing of model endpoints for issues such as insecure output handling, unauthorized access to functions, or data poisoning.
  • Develop custom testing tools or use existing frameworks (eg, LLM Guardrails, OpenAI evals, or adversarial attack libraries like TextAttack or IBM’s ART).
  • Create detailed reports with findings, impact analysis, and recommendations for technical and non-technical stakeholders.
  • Stay updated on the latest threats, vulnerabilities, and mitigations affecting generative AI systems and machine learning platforms.
Get your free, confidential resume review.
or drag and drop a PDF, DOC, DOCX, ODT, or PAGES file up to 5MB.

Similar jobs