OpenAI is hiring a Research Engineer / Research Scientist focused on AI alignment. This role involves developing methodologies to ensure AI systems align with human values, particularly in complex and high-stakes scenarios. You will collaborate with a dedicated team to design scalable alignment solutions, integrate human oversight, and create new evaluation methods. The position offers a hybrid work model, providing flexibility while you contribute to research in AI safety and trustworthiness. If you are eager to make a meaningful impact in AI safety, this role may be a strong fit.
The Alignment team at OpenAI is dedicated to ensuring that our AI systems are safe, trustworthy, and aligned with human values, even as they scale in complexity and capability. Our work focuses on developing methodologies for AI to follow human intent across a range of scenarios, including adversarial and high-stakes situations. We aim to address the most pressing alignment challenges so that our models are prepared for real-world deployment.
As a Research Engineer / Research Scientist on the Alignment team, you will work on ensuring AI systems follow human intent in complex scenarios. Your responsibilities include designing scalable solutions for AI alignment, integrating human oversight, and developing new evaluation methods.
This role is based in San Francisco, CA, with a hybrid work model (3 days in-office) and relocation assistance available.
OpenAI is committed to ensuring that artificial general intelligence benefits all of humanity. We advance AI capabilities safely and seek diverse perspectives to shape the future of AI technology.
We are an equal opportunity employer and provide accommodations for applicants with disabilities.