Computer Vision Scientist (Multimodal Sensing)

SPAICE

London

On-site

GBP 60,000 - 90,000

Full time

10 days ago

Job summary

SPAICE is searching for a Computer Vision Scientist specializing in multimodal sensing to help develop advanced Spatial AI perception algorithms for satellites and drones. This role involves leading innovative research to enhance sensor fusion capabilities and improve autonomous navigation in challenging environments. You will work on impactful missions alongside top aerospace experts, helping shape the future of autonomous systems.

Benefits

Competitive salary
Equity options
Access to premium gyms and wellness programs
Team retreats and offsites

Qualifications

  • Expertise in multimodal perception and sensor fusion.
  • Publication or product track record in related fields.
  • Deep knowledge of semantic scene understanding under varying conditions.

Responsibilities

  • Lead research and design of perception algorithms.
  • Create architectures for multi-sensor scene understanding.
  • Prototype and integrate algorithms into flight-critical autonomy software.

Skills

Multimodal perception
Sensor fusion
Semantic scene understanding
Camera pose estimation
SLAM

Education

PhD in Computer Vision, Robotics, or related field

Job description

About SPAICE

SPAICE is building the autonomy operating system that empowers satellites and drones to navigate and interact with the world – regardless of the environment. From GPS-denied zones on Earth to the unexplored frontiers of space, our Spatial AI delivers unprecedented levels of autonomy, resilience, and adaptability.

At SPAICE, you’ll work on real missions alongside leading aerospace and defense contractors, shaping the future of space and autonomous systems. If you're looking for a place where your work has a real, tangible impact – SPAICE is that place.

About the Role

Satellites that detect and avoid threats on their own. Drones that collaborate in GPS‑denied fields. Spacecraft that rendezvous with tumbling targets in orbit. All of these feats rely on rich scene understanding from multiple sensors. That’s where you come in.

As a Computer Vision Scientist (Multimodal Sensing) you’ll lead the research and design of Spatial AI perception algorithms that fuse cameras, LiDAR, radar, event sensors, and more. Your work will unlock reliable detection, mapping, and semantic reasoning in the harshest terrestrial and orbital environments, and will flow directly into flight‑critical autonomy software used by defense customers and commercial operators alike.

What you might work on
  • Space & defense missions. Design perception pipelines for situational awareness, collision detection and avoidance, formation flying, and terrain mapping and surveillance.

  • Design Spatial AI components. Create architectures that combine visual, inertial, and depth cues for robust, multi‑sensor scene understanding.

  • Sensor fusion & neural representations. Enable high‑fidelity world models onboard resource‑limited hardware.

  • Semantic understanding & visual place recognition. Identify structures, landmarks, and dynamic obstacles in real time.

  • 3D perception. Develop camera pose estimation, monocular depth estimation, and dense 3D reconstruction, both in simulation and on hardware testbeds.

  • Rapid prototyping. Work with a team of ML and robotics engineers, then integrate your work into flight processors and edge AI accelerators.

What we are looking for
  • PhD in Computer Vision, Robotics, or a related field (or equivalent industry research experience pushing the state of the art).

  • Proven expertise in multimodal perception and sensor fusion (two or more: vision, LiDAR, radar, event cameras, IMU).

  • Track record of publications in top journals and conferences (e.g. CVPR, ICCV, ICRA, ICLR, NeurIPS) or shipped products in multimodal sensing, neural representations, or SLAM for robotics or autonomous navigation.

  • Deep knowledge of semantic scene understanding and visual place recognition under extreme lighting and viewpoint changes.

  • Hands‑on experience with camera pose estimation / SLAM and monocular depth estimation on embedded or real‑time systems.

  • R&D leadership – comfort taking vague, high‑risk concepts from ideation through prototype to mission deployment while mentoring junior engineers.

  • Bonus: familiarity with radiation‑tolerant hardware, edge AI acceleration, or flight qualification processes.

Perks & Benefits
  • Competitive salary commensurate with experience and impact.

  • Equity options – you will join us on the ground floor and share in the upside.

  • Well‑being perks – access to premium gyms, climbing centers, and wellness programs.

  • Team retreats & offsites – recent adventures include a half‑marathon in Formentera and a Los Angeles retreat during Oscars weekend.
