Job Search and Career Advice Platform

Enable job alerts via email!

Manager (Technical Governance Team)

Machine Intelligence Research Institute

Berkeley

On-site

GBP 87,000 - 168,000

Full time

Yesterday
Be an early applicant

Generate a tailored resume in minutes

Land an interview and earn more. Learn more

Job summary

A nonprofit organization in Berkeley is seeking a team member focused on AI governance. The role includes managing stakeholders, leading projects, and contributing to research on AI safety. Ideal candidates have backgrounds in AI safety and governance, with strong communication skills. Compensation ranges from $120k to $230k based on experience. Join us to help reduce existential risks from smarter-than-human AI.

Benefits

Health, dental, and vision insurance
Flexible vacation approach
Potential visa sponsorship

Qualifications

  • No formal degree requirements, but strong backgrounds in AI Safety are preferred.
  • Familiarity with compute governance and AI policy.
  • Ability to think about AI safety across empirical, theoretical, and conceptual approaches.

Responsibilities

  • Manage external and internal stakeholders to align with MIRI goals.
  • Track projects and motivate team performance.
  • Contribute to research and fieldwork related to AI safety.

Skills

Strong backgrounds in AI Safety
Technical knowledge of AI hardware
Experience with AI policy
Ability to think clearly about AI safety
Communication skills with internal and external stakeholders
Job description
About MIRI

The Machine Intelligence Research Institute (MIRI) is a nonprofit based in Berkeley, California, focused on reducing existential risks from the transition to smarter-than-human AI. We have shifted our focus towards communication and AI governance. See our strategy update post for details.

About the Technical Governance Team

We are building a dynamic team that can quickly produce a large range of research outputs for the technical governance space. You may fill out the form or contact techgov@intelligence.org. We focus on researching and designing technical aspects of regulations and policy that could lead to safe AI. The team works on:

  • Inputs into regulations, requests for comments by policy bodies (e.g. NIST/US AISI, EU, UN)

  • Technical research to improve international coordination

  • Limitations of current AI safety proposals and policies

  • Communicating with and consulting for policymakers and governance organizations

Our previous publications are available on our website if you would like to read them. See our research agenda for the kinds of future projects we are excited about.

About the Role

We are primarily hiring for researchers but are also interested in hiring a manager for the team. In this role, you would manage a team working on the above areas, and have the opportunity to work on these areas directly. This role could involve the following, but we are open to candidates who want to focus on a subset of these responsibilities.

  • External stakeholder management, e.g., build and maintain relationships with policymakers and AI company employees (the target audience for much of our work)

  • Internal stakeholder management, e.g., interface with the rest of MIRI and ensure our work is consistent with broader MIRI goals, pre-publication review of the team’s outputs

  • Project management, e.g., track existing projects, motivate good work toward deadlines

  • People management, e.g., run future hiring rounds, fellowships

  • Bonus: Research contributions, e.g., contributing to object level work

  • In the above work, maintain focus on what is needed for solutions to scale to smarter-than-human intelligence and conduct research on new challenges that may emerge at that stage

Most of the day-to-day work of the team is a combination of reading, writing, and meetings. Some example activities could include:

  • Threat modeling, analyzing how AI systems could cause large-scale harm, and actions to prevent this

  • Responding to a government agency’s Request for Comment

  • Learning about risk management practices in other industries and applying these to AI

  • Designing and implementing evaluations of AI models, for example to demonstrate failure modes with current policy

  • Preparing and presenting informative briefings to policymakers

  • Reading a government or AI developer’s policy document and writing a report on its limitations

  • Designing new AI policies and standards that address the limitations of current approaches

Who We’re Looking For

There are no formal degree requirements to work on the team, however we are especially excited about applicants with strong backgrounds in AI Safety and familiarity with one or more of the following:

  • Compute governance. Technical knowledge of AI hardware / chips manufacturing and related governance proposals.

  • Policy (including AI policy). Experience could involve writing legislation or white papers, engaging with policymakers or other AI policy and governance research.

  • Strong AI Safety generalist. Demonstrated ability to think clearly about AI safety across empirical, theoretical, and conceptual approaches.

  • Bonus: Research or engineering focused on frontier AI models or the AI tech stack, including model evaluations, benchmarking AI hardware, scaling law experiments, and related empirical work.

We are also excited about candidates who are particularly strong in the following areas:

  • Agency – You get things done autonomously, focus on solving problems, and engage actively with the team.

  • Conscientiousness – You are diligent, detail-oriented, and reliable.

  • Comfort learning on the job – You quickly acquire new skills and adapt to underspecified tasks.

  • Generative thinking – You generate and iterate on new ideas and improve on others’ ideas.

  • Communication (Internal) – You keep teammates informed and participate in meetings.

  • Communication (External) – You communicate effectively to external stakeholders and can present research and ideas clearly.

In addition, we are looking for candidates who:

  • Are broadly aligned with MIRI’s values and goals, and with the Technical Governance Team’s research directions (e.g., those described in our research agenda).

  • Are passionate about MIRI’s mission and excited to support our work in reducing existential risks from AI.

Logistics
  • Application deadline – No current deadline; applications are evaluated as they come in.

  • Location – In-office (in Berkeley, CA)

  • Compensation – $120–230k. The range reflects varying experience and skills.

  • Benefits – MIRI offers health, dental, and vision insurance; a flexible approach to vacation; visa sponsorship may be available for promising candidates.

Get your free, confidential resume review.
or drag and drop a PDF, DOC, DOCX, ODT, or PAGES file up to 5MB.