AI Engineer

AIDX TECH PTE. LTD.

Singapore

On-site

SGD 70,000 - 100,000

Full time

Today

Job summary

A leading AI technology firm in Singapore is seeking an AI Engineer to enhance its safety testing framework. The role involves developing innovative algorithms to detect and mitigate AI risks while ensuring compliance with safety standards. Candidates should have a Bachelor’s degree in Computer Science or a related field, solid expertise in machine learning, and proficiency in Python. The position offers the opportunity to make impactful contributions to AI safety while collaborating in a dynamic team environment.

Qualifications

  • Solid foundation in machine learning and deep learning.
  • Proficiency in Python and experience with AI frameworks.
  • Familiarity with AI safety principles and software engineering practices.

Responsibilities

  • Design and implement algorithms for AI safety evaluation.
  • Conduct research on AI safety challenges and mitigation strategies.
  • Collaborate with engineering teams to integrate algorithms into safety frameworks.

Skills

Machine learning
Deep learning
Algorithm design
Python
AI safety principles

Education

Bachelor’s degree in Computer Science, Information Technology or related fields

Tools

PyTorch
TensorFlow
JAX

Job description

Overview and Purpose

The AI Safety Algorithm Reinforcement project is a key initiative by AIDX aimed at strengthening and modernizing the AI safety testing algorithms integrated within our testing platform. It focuses on improving the robustness, accuracy, and efficiency of AI models to ensure they meet the highest safety and reliability standards before deployment.

The successful candidate will contribute to designing, developing, and validating safety algorithms that address emerging AI risks, including adversarial attacks, model bias, and interpretability challenges, ultimately enabling trustworthy and secure AI systems.

Main Objectives and Responsibilities

The AI Engineer will play a central role in enhancing the AIDX AI safety testing framework by designing and implementing innovative solutions to detect, analyze, and mitigate risks in AI models.
For example, this includes ensuring robustness against adversarial attacks that could lead to unsafe outcomes, such as an autonomous system misinterpreting altered input signals.
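
To make this concrete, below is a minimal, illustrative sketch of one such robustness probe using the fast gradient sign method (FGSM). The model, inputs, labels, and epsilon value are hypothetical placeholders, not part of the AIDX platform.

```python
# Minimal FGSM robustness probe (illustrative sketch only).
# `model`, `x`, `y`, and `epsilon` are hypothetical placeholders, not AIDX code.
import torch
import torch.nn.functional as F

def fgsm_prediction_stable(model, x, y, epsilon=0.03):
    """Return True if the model's prediction is unchanged by an FGSM perturbation."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)

    # Loss on the clean input, with gradients flowing back to the input.
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    loss.backward()

    # FGSM: step in the sign of the input gradient to increase the loss,
    # keeping the perturbed input in the valid [0, 1] range.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    clean_pred = logits.argmax(dim=1)
    adv_pred = model(x_adv).argmax(dim=1)
    return bool((clean_pred == adv_pred).all())
```

A full safety evaluation would sweep perturbation budgets and use stronger attacks such as PGD, but the shape of the check (perturb, re-evaluate, compare) is the same.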

Key responsibilities include:

  • Developing and optimizing algorithms that improve AI safety testing capabilities.
  • Collaborating with interdisciplinary teams to integrate new safety modules into the AIDX platform.
  • Ensuring that all updates are thoroughly tested, validated, and documented.

Key Duties and Responsibilities
1. Research and Analysis
  • Conduct in-depth research into emerging AI safety challenges, threats, and mitigation strategies.
  • Continuously review and synthesize cutting-edge AI safety literature, including adversarial robustness, alignment, interpretability, and risk detection.
  • Identify trends and translate theoretical advancements into practical algorithmic improvements.
2. Algorithm Development
  • Design and develop novel algorithms for AI model safety evaluation, covering areas such as robustness testing, bias detection, and behavioral consistency (a minimal bias-metric sketch follows this list).
  • Optimize existing algorithms to improve scalability, performance, and accuracy on real-world data.
  • Prototype and iterate rapidly to achieve measurable improvements in safety outcomes.
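
As one illustration of what a bias-detection component might compute, the sketch below measures the demographic parity gap in a set of binary predictions. The arrays are made-up placeholders, and demographic parity is only one of many fairness signals such an algorithm might report.

```python
# Illustrative bias signal: demographic parity gap across groups.
# `preds` and `grps` are hypothetical placeholder arrays, not AIDX data.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
grps = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_gap(preds, grps))  # 0.75 - 0.25 = 0.5
```
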
3. Testing and Validation
  • Design rigorous simulation and validation pipelines to assess algorithm reliability under diverse and adversarial conditions (a skeletal example follows this list).
  • Quantitatively evaluate safety performance and benchmark results across models and datasets.
  • Develop testing protocols that ensure compliance with AIDX safety and reliability standards.
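
At its simplest, a validation pipeline of this kind runs a battery of named safety checks over a candidate model and records pass/fail results against thresholds. The sketch below shows that skeleton with purely hypothetical check functions and scores.

```python
# Skeleton of a safety validation suite (illustrative only).
# The check functions and scores below are hypothetical stand-ins.
from typing import Callable, Dict

def run_safety_suite(model,
                     checks: Dict[str, Callable[[object], float]],
                     threshold: float = 0.9) -> Dict[str, dict]:
    """Run each named check (expected to return a score in [0, 1]) and record pass/fail."""
    report = {}
    for name, check in checks.items():
        score = check(model)
        report[name] = {"score": score, "passed": score >= threshold}
    return report

if __name__ == "__main__":
    dummy_model = object()  # placeholder for a real model under test
    report = run_safety_suite(dummy_model, checks={
        "adversarial_robustness": lambda m: 0.93,  # stand-in score
        "demographic_parity": lambda m: 0.88,      # stand-in score
    })
    print(report)
```

In practice each check would wrap a full benchmark run, but keeping the interface to "model in, score out" makes results easy to compare across models and datasets.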
4. Collaboration and Integration
  • Partner closely with cross-functional engineering teams to integrate algorithms into the AIDX testing infrastructure.
  • Work with platform engineers to ensure seamless deployment, monitoring, and update cycles.
  • Contribute to system-level discussions on AI risk mitigation architecture and tooling.
5. Documentation and Reporting
  • Maintain clear, detailed documentation of research, development, testing, and validation processes.
  • Ensure all algorithmic updates are transparent and traceable for future audits and research.
  • Prepare technical reports and present findings to stakeholders.
6. Feedback and Continuous Improvement
  • Gather and incorporate feedback from stakeholder reviews, internal audits, and real-world testing.
  • Perform iterative updates to enhance the reliability and interpretability of algorithms.

Qualifications and Requirements

Educational Requirements (Minimum):

At least a Bachelor’s degree in one of the following fields:

  • Computer Science
  • Information Technology
  • Programming & Systems Analysis
  • Science (Computer Studies)

Technical Requirements:

  • Solid foundation in machine learning, deep learning, and algorithm design.
  • Proficiency in Python and experience with frameworks such as PyTorch, TensorFlow, or JAX.
  • Familiarity with AI safety principles, including adversarial robustness, fairness, interpretability, and risk mitigation.
  • Understanding of software engineering practices, including version control, modular coding, and testing.
  • Experience with MLOps pipelines, continuous integration, and model evaluation workflows.

Preferred Skills and Experience:

  • Experience with reinforcement learning, adversarial training, or formal verification of AI models.
  • Prior involvement in research-driven AI projects or publications.
  • Familiarity with large-scale model testing, benchmarking, or AI compliance frameworks.
  • Strong analytical, documentation, and communication skills.