An innovative firm is seeking a Research Scientist focused on biosecurity risks to lead critical research initiatives. This role involves collaborating with a diverse team to assess and mitigate risks posed by AI models, particularly in the context of biosecurity. You'll design and conduct experiments, synthesize research, and communicate findings to external stakeholders. If you're passionate about the societal impacts of AI and possess a strong background in biological sciences or related fields, this opportunity offers a dynamic environment where your contributions can shape the future of AI safety. Join us in our mission to understand and address the potential risks of advanced AI technologies.
We’re building a team that will research and mitigate extreme risks from future models.
This team will intensively red-team models to test the most significant risks they might pose in areas such as biosecurity, cybersecurity, and autonomy. We believe that clear demonstrations can significantly advance technical research and mitigations, as well as identify effective policy interventions to promote and incentivize safety.
As part of this team, you will lead research to baseline current models and test whether future frontier capabilities could cause significant harm. Day-to-day, you may decide you need to finetune a model to see whether it becomes superhuman in an eval you’ve designed; whiteboard a threat model with a national security expert; test a new training procedure or how a model uses a tool; or brief government, labs, and other research teams. Our goal is to see the frontier before we get there.
Our CBRN workstream is hiring for a Research Scientist, with an emphasis on biosecurity risks (as outlined in our Responsible Scaling Policy). By nature, this team will bring together an unusual combination of backgrounds. We are particularly looking for people with experience in these domains:
Do not rule yourself out if you do not fit one of those categories: it’s plausible the people we’re looking for do not fit any of the above! If you think about the most significant upsides and downsides of AI, and you can do good research to get glimpses of what those look like, please consider applying.
Please note: We will only be considering candidates who can be based in the Bay Area for this role. We have a strong preference for candidates who can start ASAP, and ideally by May 2025.