*GOV* AI Quality Engineer | LLM | NLP

ScienTec Consulting

Singapore

On-site

SGD 70,000 - 90,000

Full time

Today

Job summary

A technology consulting firm is seeking an AI Quality Engineer in Singapore to ensure the accuracy and performance of Large Language Models across applications such as chatbots, classification tools, and RAG systems. Responsibilities include designing tests, automating testing processes, and collaborating with AI teams. The ideal candidate has experience testing LLMs and strong Python skills. This role offers the opportunity to drive AI quality improvements.

Qualifications

  • Experience testing LLMs for chatbots and conversational AI.
  • Familiarity with accuracy evaluation methods for high-stakes NLP applications.
  • Ability to document and track issues using tools like Jira.

Responsibilities

  • Design and execute test cases to assess LLM accuracy.
  • Detect and analyse hallucinations or fabricated outputs.
  • Develop automated test scripts to streamline LLM regression testing.
  • Conduct performance and stress tests for LLM-based systems.
  • Evaluate model output quality using NLP metrics.

Skills

Experience testing LLMs
Proficiency in test automation
Strong Python skills
Understanding of AI/NLP testing methodologies
Strong problem-solving skills

Tools

Jira

Job description

We are seeking an AI Quality Engineer to evaluate and ensure the accuracy, reliability, and performance of Large Language Models (LLMs) used in GenAI applications such as chatbots, classification tools, and RAG systems. The role focuses on identifying hallucinations, validating model behaviour, and supporting improvements through structured testing and collaboration.

Key Responsibilities
  • Design and execute test cases to assess LLM accuracy, relevance, and contextual correctness.
  • Detect and analyse hallucinations or fabricated outputs, and document them clearly.
  • Develop automated test scripts (Python, PyTest or similar) to streamline LLM regression testing (see the illustrative sketch after this list).
  • Conduct functional and non-functional testing, including performance and stress tests for LLM-based systems.
  • Evaluate model output quality using NLP metrics and business-specific correctness rules.
  • Collaborate with AI engineers, data scientists, and product teams to improve model behaviour based on test findings.
  • Perform regression testing after fine-tuning, retraining, or system updates to ensure no degradation in accuracy.
  • Maintain structured documentation: test plans, test cases, test logs, and issue reports.
  • Use issue tracking tools (e.g., Jira) to report and track LLM-related bugs and inconsistencies.
  • Apply knowledge of LLMs, NLP concepts, and cloud-based AI environments (AWS/GCP/Azure preferred) to support comprehensive QA coverage.
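
For illustration only, the sketch below shows the kind of PyTest-based regression test referenced above; the generate_answer wrapper and the sample prompts are hypothetical placeholders, not part of this posting.

    # Minimal PyTest sketch for LLM regression testing (illustrative only).
    # `generate_answer` is a hypothetical wrapper around the model under test.
    import pytest

    # Small hand-curated regression set: (prompt, substring expected in a
    # correct answer). Real suites would load versioned test data instead.
    REGRESSION_CASES = [
        ("What year did Singapore gain independence?", "1965"),
        ("What currency is used in Singapore?", "Singapore dollar"),
    ]

    def generate_answer(prompt: str) -> str:
        """Placeholder for the real model call (API client, RAG pipeline, etc.)."""
        raise NotImplementedError("Wire this up to the LLM under test.")

    @pytest.mark.parametrize("prompt,expected", REGRESSION_CASES)
    def test_answer_contains_expected_fact(prompt, expected):
        answer = generate_answer(prompt)
        # A simple containment check; real suites usually combine lexical
        # checks with semantic-similarity or rubric-based scoring.
        assert expected.lower() in answer.lower(), (
            f"Possible inaccuracy or hallucination for prompt: {prompt!r}"
        )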
Requirements
  • Experience testing LLMs (e.g., GPT, BERT) for chatbots and conversational AI.
  • Proficiency in test automation (PyTest, custom AI frameworks) to detect inaccuracies and hallucinations (see the grounding-check sketch after this list).
  • Familiarity with accuracy evaluation methods for high-stakes NLP applications.
  • Understanding of AI/NLP testing methodologies, including hallucination and relevance testing.
  • Strong Python skills for writing test scripts and analysing model outputs.
  • Ability to document and track issues using tools like Jira.
  • Strong problem-solving skills to propose improvements and reduce hallucinations.
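
As a rough example of the hallucination-focused checks mentioned above, the following sketch flags answer sentences that share few content words with the retrieved context. The threshold, stopword list, and sample text are assumptions for illustration; production evaluations would typically add semantic-similarity or NLI-based metrics on top of a lexical heuristic like this.

    # Lexical grounding check for RAG outputs (illustrative sketch only):
    # flag answer sentences whose content words barely overlap with the
    # retrieved context.
    import re

    STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "to", "in", "and"}

    def content_tokens(text: str) -> set[str]:
        """Lowercased word tokens with a tiny stopword list removed."""
        return {t for t in re.findall(r"[a-z0-9]+", text.lower()) if t not in STOPWORDS}

    def ungrounded_sentences(answer: str, context: str, min_overlap: float = 0.5) -> list[str]:
        """Return answer sentences poorly supported by the retrieved context."""
        context_vocab = content_tokens(context)
        flagged = []
        for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
            tokens = content_tokens(sentence)
            if not tokens:
                continue
            overlap = len(tokens & context_vocab) / len(tokens)
            if overlap < min_overlap:
                flagged.append(sentence)
        return flagged

    if __name__ == "__main__":
        context = "ScienTec Consulting is hiring an AI Quality Engineer in Singapore."
        answer = "ScienTec is hiring in Singapore. The office has 500 employees."
        # prints: ['The office has 500 employees.']
        print(ungrounded_sentences(answer, context))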

By submitting your resume, you consent to the collection, use, and disclosure of your personal information per ScienTec’s Privacy Policy (scientecconsulting.com/privacy-policy).

This authorizes us to:

  • Contact you about potential opportunities.
  • Delete personal data that is not required at this application stage.

All applications will be processed with strict confidence. Only shortlisted candidates will be contacted.
