QA Lead

Thoth AI

Kuala Lumpur

On-site

MYR 100,000 - 150,000

Full time

Job summary

A leading AI firm in Kuala Lumpur is seeking a highly skilled Lead QA to oversee annotation quality in Audio, Video, or LLM projects. This role involves managing QA teams, developing quality standards, and ensuring client compliance through continuous improvement initiatives. The ideal candidate has 3–5 years of experience in quality assurance and strong analytical skills, with a focus on maintaining high standards in data quality and integrity.

Qualifications

  • 3–5 years of experience in data labeling, quality assurance, or content review (preferably in AI data operations).
  • Proven track record in managing QA teams and driving performance improvements.
  • Strong analytical and problem-solving skills; able to interpret large sets of QA and error data.

Responsibilities

  • Lead and manage a team of QAs within the assigned project domain.
  • Develop and standardize QA rubrics and error typologies.
  • Conduct calibration sessions regularly with QAs, Trainers, and PMs.

Skills

Quality Leadership
Analytical Rigor
Collaboration & Communication
Detail Orientation
Process Discipline
Adaptability

Education

Bachelor’s degree in Linguistics, Data Science, Computer Science, Engineering, or a related field

Tools

Airtable
Smartsheet
Jira
Labelbox

Job description

We are seeking a highly skilled and analytical Lead QA to oversee annotation quality across one of our key data domains (Audio, Video, or LLM). The Lead QA plays a crucial role in ensuring that all outputs from QAs and annotators meet the highest standards of accuracy, consistency, and compliance with client rubrics. This position also serves as the key quality gatekeeper, collaborating closely with Trainers, Project Managers (PMs), and QA teams to drive continuous improvement in annotation performance.

Key Responsibilities
  • Lead and manage a team of QAs within the assigned project domain (Audio / Video / LLM), ensuring quality objectives and SLAs are consistently met.
  • Develop and standardize QA rubrics, error typologies, and review guidelines based on project-specific annotation requirements.
  • Conduct calibration sessions regularly with QAs, Trainers, and PMs to maintain consistent quality interpretations across teams.
  • Analyze QA reports and annotation error trends to identify systemic issues and recommend corrective actions.
  • Collaborate with Trainers to translate recurring quality gaps into targeted training or retraining plans.
  • Perform quality audits on both QA and annotator performance to ensure review accuracy and reliability.
  • Monitor and report key quality metrics (Accuracy, Consistency, Disagreement Rate, Rejection Rate, etc.) to stakeholders.
  • Design and optimize QA sampling strategies to ensure efficient yet effective quality coverage.
  • Support new project launches by developing quality validation processes, test datasets, and pilot evaluation rubrics.
  • Drive continuous improvement initiatives through process standardization, tool optimization, and cross-domain knowledge sharing.
  • Ensure data integrity and confidentiality in line with client and internal security requirements.
  • Act as the domain quality expert and key escalation point for quality-related client or internal concerns.

Qualifications
  • Bachelor’s degree in Linguistics, Data Science, Computer Science, Engineering, or a related field.
  • 3–5 years of experience in data labeling, quality assurance, or content review (preferably in AI data operations).
  • Proven track record in managing QA teams and driving performance improvements.
  • Strong analytical and problem‑solving skills; able to interpret large sets of QA and error data.
  • Excellent communication and collaboration skills; able to align QA, training, and delivery stakeholders.
  • Familiarity with annotation tools, QA platforms, and performance tracking dashboards (e.g., Airtable, Smartsheet, Jira, Labelbox).
  • Prior experience in Audio, Video, or LLM annotation quality management.
  • Knowledge of process improvement methodologies (Six Sigma, Kaizen, or equivalent).
  • Experience designing QA rubrics and calibration frameworks.
  • Background in AI data services, MLOps, or data quality governance.

Core Competencies
  • Quality Leadership: Demonstrates authority in quality governance and guides QA teams toward continuous improvement.
  • Analytical Rigor: Applies data‑driven insights to identify root causes and develop effective solutions.
  • Collaboration & Communication: Works closely with Trainers and PMs to ensure alignment across functions.
  • Detail Orientation: Maintains exceptional focus on data precision and rubric compliance.
  • Process Discipline: Establishes and enforces standardized QA processes across the project lifecycle.
  • Adaptability: Handles evolving annotation guidelines and multi‑domain data challenges effectively.

Purpose of the Role

The Lead QA ensures that all annotation outputs meet defined quality benchmarks and client expectations. By managing QA teams and collaborating across training and project delivery, this role safeguards data accuracy and consistency—fundamental to the reliability and scalability of AI model training.
