
AI Red Team Build

Wyatt Partners

City Of London

Hybrid

GBP 60,000 - 80,000

Full time

30+ days ago

Job summary

A large investment firm is seeking a professional to enhance the safety of their Generative AI models. The role involves testing security, identifying weaknesses, and assessing potential harm to humans. Ideal candidates will have a background in Pen Testing, particularly within Financial Services, along with interests in Machine Learning and AI systems. Academic professionals and AI Research engineers are also encouraged to apply.

Qualifications

  • Experience in Pen Testing, particularly in the Financial Services sector.
  • Familiarity with Machine Learning and AI systems.
  • Interest in Red Teaming and AI safety research.

Responsibilities

  • Test the security and robustness of Generative AI models.
  • Identify weaknesses, biases, and safety concerns in the AI systems.
  • Collaborate with other team members to enhance model safety.

Job description

Interested in improving the safety of Generative AI models?

A large investment firm that is building its own LLMs is looking to build an AI Red Team alongside them, who can attack these models and expose their weaknesses, biases, and safety issues.

You will work on testing the models' security and robustness, as well as assessing the systems for their potential to cause harm to humans.

You will likely come from a traditional Pen Testing background in Financial Services, having moved into ML and AI systems in recent years.

We are also very open to hearing from Academics and AI Research engineers interested in Red Teaming.
