AI Safety Research Lead

LawZero

Montreal

On-site

CAD 80,000 - 120,000

Full time

Job summary

A Montreal-based non-profit organization focused on AI safety seeks an AI Safety Research Lead. The candidate will lead research initiatives aimed at reducing AI risks and developing innovative safety solutions. Applicants should have a PhD in Computer Science and over 4 years of research experience, with a focus on AI alignment and safety. The role includes shaping the research agenda and working collaboratively within a passionate team. The organization values an inclusive work environment and offers comprehensive health benefits and a minimum of 20 vacation days per year.

Benefits

Comprehensive health benefits
Minimum of 20 days vacation per year
Retirement savings employer contribution of 4%
Generous flexible benefits

Qualifications

  • Ability to think critically about AI safety research agendas.
  • Track record of contributing to high-quality research in AI safety and machine learning.
  • Ability to explain complex ideas to diverse audiences.

Responsibilities

  • Lead key research projects on AI safety.
  • Shape research agenda to reduce catastrophic AI risks.
  • Identify and propose improvements for existing research agenda.
  • Communicate core AI safety problems to team members.
  • Set research priorities for both theoretical and empirical work.

Skills

PhD in Computer Science or a relevant field
4+ years leading AI safety research projects
Experience with ML frameworks like PyTorch or TensorFlow
Strong communication skills
Ability to work collaboratively

Job description

Overview

We are seeking an AI Safety Research Lead to join our team working on a novel AI safety research agenda. In this role, you will contribute to and help drive this agenda, from theoretical proposals through to the validation of prototypes based on practical safety evaluations.

Key responsibilities
  • Play an active role in the advancement of the Scientist AI research agenda by leading key research projects on short- and long-term objectives.
  • Help shape the research agenda to maximise impact on reducing catastrophic AI risks (such as loss of control, scheming and deception).
  • Identify safety weaknesses in the existing agenda, propose improvements, and communicate them effectively to other team members.
  • Help set research priorities for both conceptual and empirical work.
  • Communicate an understanding of core AI safety problems and objectives to other team members.
Skills and competencies
  • PhD in Computer Science or a relevant field.
  • 4+ years of experience leading AI safety research projects involving frontier machine learning models, with a focus on alignment, practical evaluations, or theoretical guarantees.
  • The ability to think critically about AI safety research agendas in general and how they address safety problems identified in the literature.
  • Experience with ML frameworks like PyTorch or TensorFlow.
  • Strong communication skills, both written and verbal, with the ability to explain complex ideas to diverse audiences.
  • Track record of contributing to high-quality research in AI safety and machine learning.
  • Ability to work collaboratively in a team environment.
What we offer
  • The opportunity to contribute to a unique mission with a major impact.
  • Comprehensive health benefits.
  • A minimum of 20 vacation days per year, available upon starting.
  • A minimum retirement savings employer contribution of 4%.
  • Generous flexible benefits designed to contribute to your well-being.
  • A team of passionate experts in their field.
  • A collaborative and inclusive work environment with offices in the heart of Little Italy, in the trendy Mile-Ex district, close to public transportation.
About LawZero

LawZero is a non-profit organization committed to advancing research and creating technical solutions that enable safe-by-design AI systems. Its scientific direction is based on new research and methods proposed by Professor Yoshua Bengio, the most cited AI researcher in the world. Based in Montreal, LawZero conducts research aimed at building non-agentic AI that could be used to accelerate scientific discovery, to provide oversight for agentic AI systems, and to advance the understanding of AI risks and how to avoid them. LawZero believes that AI should be cultivated as a global public good, developed and used safely towards human flourishing. For more information, visit www.lawzero.org.

You belong here

At LawZero, diversity is important to us. We value a work environment that is fair, open and respectful of differences. We welcome applications from highly qualified individuals interested in working towards our mission in a respectful, inclusive and collaborative setting.

Your personal information will be collected and processed by LawZero to evaluate your application for employment in compliance with our Privacy Policy. Under privacy laws in force in your country of residence, you may have several privacy rights, such as to request access to your personal information or to request that your personal information be rectified or erased. Details on how you can exercise your rights can be found in our Privacy Policy.
