
Research Scientist, World Models and Embodied AI

Meta

Greater London

On-site

GBP 60,000 - 90,000

Full time

Yesterday


Job summary

A leading technology company is seeking a Research Scientist to develop machine perception technology within its Reality Labs Research team. The role involves leading research on 3D computer vision, designing experiments, and contributing to publications. Candidates should hold a PhD in a relevant field and have skills in Deep Learning, Reinforcement Learning, and programming languages such as C/C++, Python, or Rust. This position is based in Greater London, UK, where innovation in AI and AR/VR technology is a priority.

Qualifications

  • PhD in Computer Vision, Robotics, AI, or equivalent practical experience.
  • Experience communicating research to public audiences.
  • Hands-on experience implementing 3D computer vision algorithms.

Responsibilities

  • Lead research on 3D computer vision and embodied AI.
  • Design experiments related to dynamic scene modeling.
  • Contribute to publications and open-sourcing efforts.

Skills

3D computer vision
Deep Learning
Reinforcement Learning
C/C++
Python
Rust

Education

PhD in Computer Vision, Robotics, AI, or a related field

Tools

Unix
Physics simulators

Job description

Summary

Meta Reality Labs Research (RL Research) brings together a world‑class R&D team of researchers, developers, and engineers with the shared goal of developing AI and AR/VR technology across the spectrum. The Surreal Spatial AI group is seeking high‑performing Research Scientists to build machine perception technology allowing AI agents and systems to perceive, understand, and reason about the 3D world around them. The aim of this role is to develop advanced algorithms for active perception and intelligent interaction. You will investigate novel architectures combining World Models, data‑driven control, and Machine Perception for real‑time applications. Leveraging data from egocentric devices (Project Aria) and robotic platforms, your work will span the full stack, from high‑fidelity 3D understanding to the predictive modeling of dynamics and actions, empowering agents to reason about and manipulate their surroundings.

Research Scientist, World Models & Embodied AI Responsibilities
  1. Lead, collaborate, and execute on research that pushes forward the state of the art in 3D computer vision, embodied reasoning, and/or predictive world modeling.

  2. Directly contribute to experiments, including designing experimental details, authoring reusable code, running evaluations, and organizing results.

  3. Work with the team to design practical experiments and prototype systems related to dynamic scene modeling, long‑horizon reasoning, and machine perception.

  4. Contribute to publications and open‑sourcing efforts.

  5. Help identify long‑term ambitious research goals as well as intermediate milestones.

Minimum Qualifications
  1. Currently has, or is in the process of obtaining, a PhD in Computer Vision, Robotics, AI, Computer Science, or a related field, or equivalent practical experience. Degree must be completed prior to joining Meta.

  2. Experience communicating research to public audiences of peers.

  3. Experience with real‑world system building and data collection, including design, coding, and evaluation with modern ML methods.

  4. Research experience involving 3D Computer Vision, Deep Learning, or Reinforcement Learning—specifically related to scene understanding, generative modeling, autonomous agents, or robotic control.

  5. Experience in developing and debugging in C/C++, Python, or Rust.

  6. Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment.

Preferred Qualifications
  1. Hands‑on experience implementing 3D computer vision algorithms and training/evaluating large‑scale ML/AI models.

  2. Familiarity with Reinforcement Learning (RL), vision-language-action models (VLAs), control theory, or learning‑based planning.

  3. Experience bridging the gap between perception and action (e.g., Active Vision, Embodied AI, Inverse RL, or RLHF).

  4. Experience with physics simulators or synthetic environments (e.g., Habitat, MuJoCo, Isaac Lab).

  5. Experience working in a Unix environment.

  6. Demonstrated research and software engineering experience via an internship, work experience, coding competitions, or widely used contributions in open source repositories (e.g., GitHub).

  7. Proven track record of achieving significant results as demonstrated by grants, fellowships, patents, and publications at leading workshops, journals, or conferences such as CVPR, CoRL, ICRA, RSS, NeurIPS, ECCV, ICCV, IROS, or similar.

Industry

Internet
