AI Engineer

AIDX TECH PTE. LTD.

Singapore

On-site

SGD 60,000 - 80,000

Full time

Posted yesterday

Job summary

A technology firm in Singapore is seeking an AI Engineer to enhance AI safety testing frameworks by developing innovative algorithms. This role involves collaboration with cross-functional teams and demands strong proficiency in Python and machine learning. The ideal candidate has at least a Bachelor's degree in Computer Science or a related field, and familiarity with AI safety principles is preferred. Join us to make AI systems trustworthy and secure.

Qualifications

  • At least a Bachelor's degree in Computer Science, Information Technology, or related field.
  • Solid foundation in machine learning and deep learning.
  • Experience with MLOps pipelines and model evaluation workflows.

Responsibilities

  • Develop and optimize algorithms for AI safety.
  • Conduct in-depth research into emerging AI safety challenges.
  • Design rigorous validation pipelines for algorithm reliability.

Skills

  • Solid foundation in machine learning
  • Proficiency in Python
  • Familiarity with AI safety principles
  • Understanding of software engineering practices
  • Strong analytical skills

Education

Bachelor's degree in Computer Science or related field

Tools

  • PyTorch
  • TensorFlow
  • JAX

Job description

Overview and Purpose

The AI Safety Algorithm Reinforcement project is a key initiative by AIDX aimed at strengthening and modernizing the AI safety testing algorithms integrated within our testing platform. It focuses on improving the robustness, accuracy, and efficiency of AI models to ensure they meet the highest safety and reliability standards before deployment.

The successful candidate will contribute to designing, developing, and validating safety algorithms that address emerging AI risks — including adversarial attacks, model bias, and interpretability challenges — ultimately enabling trustworthy and secure AI systems.

Main Objectives and Responsibilities

The AI Engineer will play a central role in enhancing the AIDX AI safety testing framework by designing and implementing innovative solutions to detect, analyze, and mitigate risks in AI models, for example by ensuring robustness against adversarial attacks that could lead to unsafe outcomes, such as an autonomous system misinterpreting altered input signals.

Key responsibilities include:

  • Developing and optimizing algorithms that improve AI safety testing capabilities.
  • Collaborating with interdisciplinary teams to integrate new safety modules into the AIDX platform.
  • Ensuring that all updates are thoroughly tested, validated, and documented.

Key Duties and Responsibilities

1. Research and Analysis

  • Conduct in-depth research into emerging AI safety challenges, threats, and mitigation strategies.
  • Continuously review and synthesize cutting-edge AI safety literature, including adversarial robustness, alignment, interpretability, and risk detection.
  • Identify trends and translate theoretical advancements into practical algorithmic improvements.

2. Algorithm Development

  • Design and develop novel algorithms for AI model safety evaluation, covering areas like robustness testing, bias detection, and behavioral consistency.
  • Optimize existing algorithms to improve scalability, performance, and accuracy on real-world data.
  • Prototype and iterate rapidly to achieve measurable improvements in safety outcomes.

3. Testing and Validation

  • Design rigorous simulation and validation pipelines to assess algorithm reliability under diverse and adversarial conditions.
  • Quantitatively evaluate safety performance and benchmark results across models and datasets.
  • Develop testing protocols that ensure compliance with AIDX safety and reliability standards.

4. Collaboration and Integration

  • Partner closely with cross-functional engineering teams to integrate algorithms into the AIDX testing infrastructure.
  • Work with platform engineers to ensure seamless deployment, monitoring, and update cycles.
  • Contribute to system-level discussions on AI risk mitigation architecture and tooling.

5. Documentation and Reporting

  • Maintain clear, detailed documentation of research, development, testing, and validation processes.
  • Ensure all algorithmic updates are transparent and traceable for future audits and research.
  • Prepare technical reports and present findings to stakeholders.

6. Feedback and Continuous Improvement

  • Gather and incorporate feedback from stakeholder reviews, internal audits, and real-world testing.
  • Perform iterative updates to enhance the reliability and interpretability of algorithms.

Qualifications and Requirements

Educational Requirements (Minimum):

At least a Bachelor's degree in one of the following fields:

  • Computer Science
  • Information Technology
  • Programming & Systems Analysis
  • Science (Computer Studies)

Technical Requirements:

  • Solid foundation in machine learning, deep learning, and algorithm design.
  • Proficiency in Python and experience with frameworks such as PyTorch, TensorFlow, or JAX.
  • Familiarity with AI safety principles, including adversarial robustness, fairness, interpretability, and risk mitigation.
  • Understanding of software engineering practices, including version control, modular coding, and testing.
  • Experience with MLOps pipelines, continuous integration, and model evaluation workflows.

Preferred Skills and Experience:

  • Experience with reinforcement learning, adversarial training, or formal verification of AI models.
  • Prior involvement in research-driven AI projects or publications.
  • Familiarity with large-scale model testing, benchmarking, or AI compliance frameworks.
  • Strong analytical, documentation, and communication skills.