An innovative firm is seeking a Research Engineer on Alignment Science to contribute to groundbreaking research in AI safety. This role focuses on developing techniques to ensure advanced AI systems remain helpful and harmless, even as they surpass human-level intelligence. The ideal candidate will have a blend of scientific and engineering skills, with experience in machine learning and AI safety. Join a dynamic team working on cutting-edge projects that shape the future of AI and its alignment with human values. This position offers the opportunity to collaborate with experts in the field and make a meaningful impact on AI development.
You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams including Interpretability, Fine-Tuning, and the Frontier Red Team.
Current topics of focus include:
Note: Currently, the team prefers candidates who are able to be based in the Bay Area. However, we remain open to any candidate who can travel to the Bay Area 25% of the time.
Representative projects:
You may be a good fit if you:
Strong candidates may also:
Candidates need not have: