Job Search and Career Advice Platform

Physical AI Engineer (Edge Autonomy)

ST Engineering Geo+Insights & Satellite Systems

Singapore

On-site

SGD 80,000 - 120,000

Full time

Job summary

A premier engineering firm in Singapore seeks a Physical AI Engineer specializing in developing and integrating AI models for robotics. Candidates should have a Bachelor's or Master's in a relevant field and 3 years of experience building robotics/Physical AI systems. The role involves deploying models on robotic hardware and optimizing their performance for real-world applications. Join us to work on state-of-the-art AI models that enable real robots to operate autonomously, with opportunities for significant research and deployment impact.

Benefits

Opportunity to work on cutting-edge AI models
High-impact work in robotic autonomy
Combination of research and deployment

Qualifications

  • 3 years of experience building robotics/Physical AI systems.
  • Good knowledge of model compression for embedded devices.
  • Strong foundation in robotics middleware integration.

Responsibilities

  • Develop and integrate AI/ML models for robotics.
  • Conduct domain adaptation and reinforcement learning.
  • Provide recommendations on emerging Physical AI technologies.

Skills

ROS2 integration
AI/ML model fine-tuning
Generative AI experience
Robotics system development
Model optimization for edge

Education

Bachelor’s/Master’s in Computer Science or related field

Tools

TensorRT
CUDA
Linux systems

Job description

Title: Physical AI Engineer (Edge Autonomy)

Job ID: 20267

Location: Elect – 100 Jurong East Street, SG

Description

About the Role: We are looking for Physical AI Engineers focused on developing and integrating Machine Learning (including Generative AI) models that enable real robots to perceive, reason, and act autonomously. This role bridges Robotics and Agentic AI — working on perception, decision‑making, and multi‑model integration for multi‑robot and embodied systems. You will work on training and adapting Generative/Foundation models (vision, language, planning, VLA models, swarm reasoning) and deploying them onto edge and robotic hardware.

Responsibilities
  • Develop and integrate AI/ML and Generative/Foundation models for perception, mapping, task planning, and embodied decision‑making.
  • Fine‑tune or adapt Generative/Foundation models (Small Language / Small Vision‑Language / Vision‑Language‑Action) for edge deployment in real‑world robotic applications.
  • Implement on‑device model optimization (quantization, distillation, TensorRT, etc.).
  • Build data pipelines: simulation → real‑world → feedback loop for continual model improvement.
  • Conduct domain adaptation and reinforcement/interactive learning for robot skills.
  • Integrate models into ROS2 and swarm autonomy stacks.
  • Evaluate model performance in both simulation and hardware deployment.
  • Optimize inference latency and reliability on edge compute platforms.
  • Formulate conceptual and detailed technical solutions for application development to meet customer requirements.
  • Provide recommendations on relevant emerging technology in Physical AI/Robotics to senior management.
  • Identify and lead strategic technical capability development for Physical AI/Robotics.
  • Collaborate on research and development projects to explore new capabilities and applications for Physical AI/Robotics technology.
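The on‑device optimization work listed above (quantization, distillation, TensorRT) can be illustrated with a minimal, dependency‑free sketch of post‑training symmetric int8 quantization. This is a toy illustration, not the team's actual pipeline; production deployments would typically use framework tooling such as TensorRT or PyTorch's quantization APIs.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Round each weight to the nearest int8 step and clamp to [-128, 127].
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from int8 codes."""
    return [scale * v for v in q]

# Toy weight tensor standing in for one layer of a perception model.
weights = [0.52, -1.27, 0.03, 0.98, -0.41]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# Reconstruction error is bounded by half a quantization step.
assert max_err <= scale / 2 + 1e-12
```

Per‑tensor symmetric quantization like this trades a small, bounded reconstruction error for a 4x reduction in weight storage versus float32, which is what makes large models fit on Jetson‑class devices.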
Minimum Requirements
  • Bachelor’s/Master’s in Computer Science, Machine Learning, AI, Robotics, or related field.
  • 3 years of experience building robotics/Physical AI systems. Candidates without work experience but with relevant skills are also welcome to apply.
  • Strong foundation in ROS2 or robotics middleware integration.
  • Extensive experience building/fine‑tuning of AI/ML and Generative/Foundation models (vision, transformer‑based, or RL) for robotic systems.
  • Good knowledge of model compression/acceleration for embedded devices (Jetson, Pi, etc.).
  • GPU/CUDA experience or on‑device inference for embedded AI.
  • Experienced with deployment on embedded/edge Linux systems.
  • Good understanding of robotics model training pipelines (perception → planning → control).
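A first step in the on‑device inference work described above is usually measuring per‑call latency percentiles on the target edge platform. A minimal standard‑library sketch follows; the timed workload is a hypothetical stand‑in for a model's forward pass, not a real inference call.

```python
import statistics
import time

def benchmark(fn, warmup=10, iters=200):
    """Measure per-call latency of `fn` in milliseconds."""
    for _ in range(warmup):  # warm caches before timing
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
    }

# Toy compute loop standing in for a quantized model forward pass.
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
assert 0 < stats["p50_ms"] <= stats["p95_ms"]
```

Tracking p95 rather than the mean matters on edge platforms, where thermal throttling and background load create a long latency tail that the average hides.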
Preferred Experience
  • Vision‑Language‑Action (VLA) or Multi‑Modal model experience.
  • Reinforcement learning / imitation learning / interactive training loops.
  • Synthetic data and simulation‑based training (Isaac Sim, Habitat, etc.).
  • Knowledge of distributed training pipelines or MLOps for robotics.
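The reinforcement‑learning item above can be sketched with tabular Q‑learning on a toy corridor task. This is illustrative only; real robot‑skill training would use simulators such as Isaac Sim, richer observation/action spaces, and function approximation, and every name here is made up for the example.

```python
import random

def train_q_learning(n_states=5, n_actions=2, episodes=500, seed=0):
    """Tabular Q-learning on a toy corridor: the agent starts at state 0
    and is rewarded for reaching the rightmost state via action 1."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, eps = 0.5, 0.9, 0.1
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning temporal-difference update.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train_q_learning()
# After training, the greedy policy moves right in every non-terminal state.
assert all(Q[s][1] > Q[s][0] for s in range(4))
```

The same update rule underlies the interactive training loops mentioned above; the engineering difficulty in robotics lies in generating the transitions safely, typically from simulation before real‑world rollouts.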
Additional Skills
  • Strong experimentation and iterative problem‑solving mindset.
  • Comfortable bridging research prototypes into production‑grade systems.
  • Able to collaborate tightly with AI and systems engineers.
  • Curious and self‑driven — able to explore new Physical AI approaches and rapidly test them.
What We Offer
  • Opportunity to work on state‑of‑the‑art embodied AI models powering real robots.
  • Combination of research and deployment — not just writing models, but seeing them act in the physical world.
  • High‑impact work on cutting‑edge robotic autonomy and swarm behaviours.