Computer Vision Engineer

SPAICE

London

On-site

GBP 45,000 - 75,000

Full time

10 days ago

Job summary

SPAICE is seeking a Computer Vision Engineer specializing in multimodal sensing. You'll develop perception algorithms for advanced satellites and drones, contributing to significant missions in space and defense. Ideal candidates have an M.S. in a relevant field and strong skills in C++ and Python.

Benefits

Competitive salary
Equity options
Access to premium gyms and wellness programs
Team retreats & offsites

Qualifications

  • Expertise in multimodal perception & sensor fusion, semantic scene understanding, and more.
  • Strong software engineering skills in C++ and Python.
  • Demonstrated ability to deliver production-quality code.

Responsibilities

  • Implement perception pipelines for situational awareness, collision detection, and more.
  • Build and integrate perception stacks for Spatial AI architectures.
  • Collaborate with a team delivering high-performance code on real missions.

Skills

Multimodal perception
Sensor fusion
Semantic scene understanding
SLAM / camera-pose estimation
Monocular depth estimation
Visual place recognition
C++
Python

Education

M.S. in Computer Vision/Robotics

Tools

CUDA
TensorRT
Vulkan

Job description

About SPAICE

SPAICE is building the autonomy operating system that empowers satellites and drones to navigate and interact with the world – regardless of the environment. From GPS-denied zones on Earth to the unexplored frontiers of space, our Spatial AI delivers unprecedented levels of autonomy, resilience, and adaptability.

At SPAICE, you’ll work on real missions alongside leading aerospace and defense contractors, shaping the future of space and autonomous systems. If you're looking for a place where your work has a real, tangible impact – SPAICE is that place.

About the Role

Satellites that detect & avoid threats on their own. Drones that collaborate in GPS‑denied fields. Spacecraft that rendezvous with tumbling targets in orbit. All of these feats demand robust scene understanding from multiple sensors. That’s where you come in.

As a Computer Vision Engineer (Multimodal Sensing) you’ll implement and refine perception algorithms that fuse cameras, LiDAR, radar, event sensors, and beyond. Working shoulder‑to‑shoulder with a top‑tier team of CV scientists, you’ll translate cutting‑edge research into flight‑ready code for space and defense missions.

What you might work on
  • Implement perception pipelines for situational awareness, collision detection & avoidance, formation flying, surveillance and terrain mapping across satellites and drones operating in GPS‑denied, dynamic environments.

  • Build the perception stack of our Spatial AI architectures, fusing visual, inertial, and depth cues for robust, multi‑sensor scene understanding.

  • Integrate sensor fusion & neural representations to create dense onboard world models that run in real time on resource‑constrained hardware.

  • Deploy semantic scene understanding, visual place recognition, pose estimation, and monocular depth estimation on embedded or edge‑AI processors.

  • Collaborate with a top‑tier team of CV scientists and cross‑disciplinary engineers, delivering well‑tested, high‑performance code into flight processors, Hardware‑in‑the‑Loop (HIL) and Software‑in‑the‑Loop (SIL) setups, and real missions.

What we are looking for
  • M.S. in Computer Vision/Robotics, or a related field.

  • Expertise in at least two of the following: multimodal perception & sensor fusion, neural representations, semantic scene understanding, SLAM / camera‑pose estimation, monocular depth estimation, visual place recognition.

  • Strong software engineering skills in C++ and Python, including performance‑critical CV/ML code on Linux or embedded platforms.

  • Familiarity with GPU or edge‑AI acceleration (CUDA, TensorRT, Vulkan, or similar).

  • Demonstrated ability to deliver production‑quality, well‑tested code in collaborative, fast‑moving environments.

Preferred Qualifications
  • Experience deploying perception pipelines on resource‑constrained hardware.

  • Publications on multimodal sensing, neural representations, or SLAM for robotics or autonomous navigation in journals and conferences (e.g., CVPR, ICRA, ECCV/ICCV, NeurIPS).

Perks & Benefits
  • Competitive salary commensurate with experience and impact.

  • Equity options – you will join on the ground floor and share in the upside.

  • Well‑being perks – access to premium gyms, climbing centers, and wellness programs.

  • Team retreats & offsites – recent adventures include a half‑marathon in Formentera and a Los Angeles retreat during Oscars weekend.
