Principal Physical AI Engineer (Edge Autonomy)

ST Engineering Geo+Insights & Satellite Systems

Singapore

On-site

SGD 100,000 - 125,000

Full time

2 days ago

Job summary

A leading technology company in Singapore is seeking a Principal Physical AI Engineer to work on the development and integration of advanced Machine Learning models for robotic systems. The role involves training Generative AI models and deploying them in real-world applications. Ideal candidates are knowledgeable in robotics and AI, with experience in model optimization and robotics middleware. This position offers an opportunity to engage in cutting-edge research and development in the field of Physical AI.

Benefits

Opportunity to work on state-of-the-art AI models
Exposure to both AI and hardware environments

Qualifications

  • 3 - 10 years of experience building robotics/Physical AI systems.
  • Strong foundation in ROS2 or robotics middleware integration.
  • Extensive experience building and fine-tuning AI/ML models for robotic systems.

Responsibilities

  • Develop and integrate AI/ML models for perception and decision-making.
  • Implement model optimization for real-world robotic applications.
  • Evaluate model performance in both simulation and hardware deployment.

Skills

AI/ML model integration
Robotics middleware integration
Model compression/acceleration
GPU/CUDA experience
Robotics model training pipelines

Education

Bachelor’s/Master’s in Computer Science, Machine Learning, AI, Robotics

Tools

ROS2
Jetson
Embedded Linux systems

Job description

Title: Principal Physical AI Engineer (Edge Autonomy)
Job ID: 20268
Location: Elect – 100 Jurong East Street, SG

About the Role

We are looking for Physical AI Engineers focused on developing and integrating Machine Learning (including Generative AI) models that enable real robots to perceive, reason, and act autonomously. This role bridges Robotics and Agentic AI — working on perception, decision-making, and multi-model integration for multi-robot and embodied systems. You will work on training and adapting Generative/Foundation models (vision, language, planning, VLA models, swarm reasoning) and deploying them onto edge and robotic hardware.

Responsibilities
  • Develop and integrate AI/ML and Generative/Foundation models for perception, mapping, task planning, and embodied decision-making.
  • Fine-tune or adapt Generative/Foundation models (Small Language, Small Vision-Language, Vision-Language-Action) for edge deployment in real-world robotic applications.
  • Implement on-device model optimization (quantization, distillation, TensorRT, etc.).
  • Build data pipelines: simulation → real-world → feedback loop for continual model improvement.
  • Conduct domain adaptation and reinforcement/interactive learning for robot skills.
  • Integrate models into ROS2 and swarm autonomy stacks.
  • Evaluate model performance in both simulation and hardware deployment.
  • Optimize inference latency and reliability on edge compute platforms.
  • Formulate conceptual and detailed technical solutions for application development to meet customer requirements.
  • Provide recommendations on relevant emerging technology in Physical AI/Robotics to senior management.
  • Identify and lead strategic technical capability development for Physical AI/Robotics.
  • Collaborate on research and development projects to explore new capabilities and applications for Physical AI/Robotics technology.
Minimum Requirements
  • Bachelor’s/Master’s in Computer Science, Machine Learning, AI, Robotics, or related field.
  • 3 – 10 years of experience building robotics/Physical AI systems. Fresh graduates with relevant skills are also welcome to apply.
  • Strong foundation in ROS2 or robotics middleware integration.
  • Extensive experience building and fine-tuning AI/ML and Generative/Foundation models (vision, transformer-based, or RL) for robotic systems.
  • Good knowledge of model compression/acceleration for embedded devices (Jetson, Raspberry Pi, etc.).
  • Experience with GPU/CUDA or on-device inference for embedded AI.
  • Experience deploying to embedded/edge Linux systems.
  • Good understanding of robotics model training pipelines (perception → planning → control).
Preferred Experience
  • Vision-Language-Action (VLA) or Multi-Modal model experience.
  • Reinforcement learning / imitation learning / interactive training loops.
  • Synthetic data and simulation-based training (Isaac Sim, Habitat, etc.).
  • Knowledge of distributed training pipelines or MLOps for robotics.
Additional Skills
  • Strong experimentation and iterative problem-solving mindset.
  • Comfortable bridging research prototypes into production-grade systems.
  • Able to collaborate tightly with AI and systems engineers.
  • Curious and self-driven — able to explore new Physical AI approaches and rapidly test them.
What We Offer
  • Opportunity to work on state-of-the-art embodied AI models powering real robots.
  • Combination of research and deployment — not just writing models, but seeing them act in the physical world.
  • High-impact work on cutting-edge robotic autonomy and swarm behaviours.
  • Exposure to both AI and hardware-level execution environments.