Interested in improving the safety of Generative AI models?
A large investment firm building its own LLMs is looking to establish an AI Red Team to attack these models and expose their weaknesses, biases, and safety risks.
You will test the security and robustness of these models and evaluate the systems for their potential to cause harm to humans.
Ideal candidates will come from a traditional Pen Testing background in Financial Services and have transitioned into ML and AI systems in recent years. We are also open to hearing from academics and AI research engineers interested in Red Teaming.