
Computer Vision Scientist (Multimodal Sensing)

SPAICE

London

On-site

GBP 50,000 - 80,000

Full time


Job summary

A leading company in spatial AI is seeking a Computer Vision Scientist to develop algorithms that enhance the perception capabilities of satellites and drones in challenging environments. You will work on groundbreaking research that has direct implications for space and defense missions, fostering innovation and leading projects from conception to deployment.

Benefits

Competitive salary
Equity options
Access to premium gyms and wellness programs
Team retreats & offsites

Qualifications

  • PhD or equivalent industry research experience in Computer Vision or Robotics.
  • Expertise in multimodal perception and sensor fusion across vision, LiDAR, and radar.
  • Experience with SLAM and monocular depth estimation on real-time systems.

Responsibilities

  • Lead research and design of Spatial AI perception algorithms.
  • Design perception pipelines for situational awareness and collision detection.
  • Rapid prototyping with ML and robotics engineers, integrating into systems.

Skills

Multimodal perception
Sensor fusion
Deep learning
Semantic scene understanding
Visual place recognition

Education

PhD in Computer Vision, Robotics, or related field

Job description

About SPAICE

SPAICE is building the autonomy operating system that empowers satellites and drones to navigate and interact with the world – regardless of the environment. From GPS-denied zones on Earth to the unexplored frontiers of space, our Spatial AI delivers unprecedented levels of autonomy, resilience, and adaptability.

At SPAICE, you’ll work on real missions alongside leading aerospace and defense contractors, shaping the future of space and autonomous systems. If you're looking for a place where your work has a real, tangible impact – SPAICE is that place.

About the Role

Satellites that detect and avoid threats on their own. Drones that collaborate in GPS‑denied fields. Spacecraft that rendezvous with tumbling targets in orbit. All of these feats rely on rich scene understanding from multiple sensors. That’s where you come in.

As a Computer Vision Scientist (Multimodal Sensing) you’ll lead the research and design of Spatial AI perception algorithms that fuse cameras, LiDAR, radar, event sensors, and more. Your work will unlock reliable detection, mapping, and semantic reasoning in the harshest terrestrial and orbital environments, and will flow directly into flight‑critical autonomy software used by defense customers and commercial operators alike.

What you might work on
  • Space & defense missions. Design perception pipelines for situational awareness, collision detection and avoidance, formation flying, and terrain mapping and surveillance.

  • Design Spatial AI components. Create architectures that combine visual, inertial, and depth cues for robust, multi‑sensor scene understanding.

  • Sensor fusion & neural representations to enable high‑fidelity world models onboard resource‑limited hardware.

  • Semantic understanding & visual place recognition to identify structures, landmarks, and dynamic obstacles in real time.

  • Camera pose estimation, monocular depth estimation, and dense 3D reconstruction both in simulation and on‑hardware testbeds.

  • Rapid prototyping with a team of ML and robotics engineers, followed by integration into flight processors and edge AI accelerators.

What we are looking for
  • PhD in Computer Vision, Robotics, or a related field (or equivalent industry research experience pushing the state of the art).

  • Proven expertise in multimodal perception and sensor fusion (two or more: vision, LiDAR, radar, event cameras, IMU).

  • Publication or product track record in multimodal sensing, neural representations, or SLAM for robotics or autonomous navigation, at venues such as CVPR, ICLR, ICRA, ICCV, or NeurIPS.

  • Deep knowledge of semantic scene understanding and visual place recognition under extreme lighting and viewpoint changes.

  • Hands‑on experience with camera pose / SLAM and monocular depth estimation on embedded or real‑time systems.

  • R&D leadership – comfort taking vague, high‑risk concepts from ideation through prototype to mission deployment while mentoring junior engineers.

  • Bonus: familiarity with radiation‑tolerant hardware, edge AI acceleration, or flight qualification processes.

Perks & Benefits
  • Competitive salary commensurate with experience and impact.

  • Equity options – you will join on the ground floor and share in the upside.

  • Well‑being perks – access to premium gyms, climbing centers, and wellness programs.

  • Team retreats & offsites – recent adventures include a half‑marathon in Formentera and a Los Angeles retreat during Oscars weekend.
