A leading investment firm is seeking an AI Red Team member to enhance the safety of its Generative AI models. The role involves testing security, assessing vulnerabilities, and providing insight into potential harms. Candidates with a Pen Testing background or academic experience in AI are encouraged to apply; the role offers a chance to make an impactful contribution to AI safety.
Interested in improving the safety of Generative AI models?
A large investment firm building its own LLMs is looking to establish an AI Red Team to identify vulnerabilities, biases, and safety concerns in their models.
You will work on testing the security and robustness of these systems, as well as assessing their potential to cause harm to humans.
Ideal candidates may come from a traditional Pen Testing background in Financial Services and have recently transitioned into ML & AI systems.
We are also very open to hearing from academics and AI research engineers interested in Red Teaming.