AI Red Team Build

Wyatt Partners

London

On-site

GBP 55,000 - 85,000

Full time

4 days ago
Job summary

A leading investment firm is seeking an AI Red Team member to improve the safety of its generative AI models. The role involves testing model security and robustness, assessing vulnerabilities and biases, and evaluating potential harms. Candidates with a penetration testing background or academic experience in AI are encouraged to apply, and the role offers the chance to make an impactful contribution to AI safety.

Qualifications

  • A traditional penetration testing background, ideally in Financial Services.
  • A genuine interest in ML & AI systems is essential.
  • Academics and AI research engineers are also welcome.

Responsibilities

  • Test the security and robustness of generative AI models.
  • Identify vulnerabilities and biases in AI systems.
  • Assess the potential for AI models to cause harm to humans.

Skills

Pen Testing
Machine Learning
AI Safety Assessment

Job description

Interested in improving the safety of Generative AI models?

A large investment firm building its own LLMs is looking to establish an AI Red Team to identify vulnerabilities, biases, and safety concerns in its models.

You will work on testing the security and robustness of these systems, as well as assessing their potential to cause harm to humans.

Ideal candidates may come from a traditional penetration testing background in Financial Services and have recently transitioned into ML & AI systems.

We are also very open to hearing from academics and AI research engineers interested in red teaming.
