
A large investment firm is seeking a professional to enhance the safety of their Generative AI models. The role involves testing security, identifying weaknesses, and assessing potential harm to humans. Ideal candidates will have a background in Pen Testing, particularly within Financial Services, along with interests in Machine Learning and AI systems. Academic professionals and AI Research engineers are also encouraged to apply.
Interested in improving the safety of Generative AI models?
A large investment firm that is building its own LLMs is looking to build an AI Red Team that can attack these models and expose their weaknesses, biases, and safety issues.
You will test the models' security and robustness, as well as assess the systems' potential to cause harm to humans.
You will likely come from a traditional Pen Testing background in Financial Services but have moved into ML and AI systems in recent years.
We are also very open to hearing from Academics and AI Research engineers interested in Red Teaming.