

Product Leader (GenAI Safety Evaluation) - Platform Responsibility

TikTok Pte. Ltd.

Singapore

On-site

SGD 60,000 - 80,000

Full time

2 days ago


Job summary

A leading tech company based in Singapore is seeking a professional specializing in content safety. The role focuses on building and optimizing content safety systems using advanced AI methodologies. Responsibilities include evolving safety evaluation frameworks, creating benchmarks, and establishing root cause analysis processes. The ideal candidate will have a Bachelor's degree, extensive experience in content safety or AI-related fields, a strong understanding of machine learning, and strong collaboration skills.

Qualifications

  • 5+ years of experience in strategy, data analysis, product, content safety, or AI/LLM-related roles.
  • Strong understanding of machine learning including model evaluation and data-driven decision-making.
  • Ability to collaborate with Algo/DS teams to interpret model performance.

Responsibilities

  • Own and evolve the GenAI / LLM safety evaluation framework.
  • Design and scale online safety evaluation systems.
  • Build and maintain safety benchmarks with real-world user behavior datasets.
  • Establish root cause analysis processes for issue detection.
  • Partner cross-functionally to translate safety findings into optimization roadmaps.

Skills

Machine learning fundamentals
Data analysis
Collaborative skills
Regulatory knowledge

Education

Bachelor's degree or above

Job description

Responsibilities

The Platform Responsibility team is at the forefront of building and optimizing content safety systems. We leverage advanced large language models to enhance review efficiency, risk control, and user trust. Working closely with business and technical stakeholders, we deliver scalable solutions that keep pace with rapid global growth.

  • Own and evolve the GenAI / LLM safety evaluation framework across multimodal use cases, including text-to-text, text-to-image, and text-to-video, with a focus on measuring model performance across core safety and risk dimensions.
  • Design and scale online safety evaluation systems, introducing new evaluation methodologies and driving automation and large-scale deployment to enable continuous model and product iteration.
  • Build and maintain safety benchmarks by creating high-quality evaluation datasets and adversarial test cases, ensuring benchmark distributions closely reflect real-world user behavior and risk scenarios.
  • Establish and continuously improve root cause analysis (RCA) processes, including standardized workflows for issue detection, attribution, and postmortem reviews, and distill reusable risk patterns and actionable optimization insights.
  • Partner cross-functionally with Model, Algorithm, Product, Data, and Safety teams to translate key safety findings into optimization roadmaps and ensure closed-loop validation of improvements.

Qualifications

Minimum Qualifications:

  • Bachelor's degree or above with 5+ years of experience in strategy, data analysis, product, content safety, or AI/LLM-related roles.
  • Strong understanding of machine learning fundamentals, including model evaluation, thresholding, and data-driven decision-making.
  • Proven ability to collaborate with Algo/DS teams to interpret model performance and drive ML-informed product iterations.
  • Experience translating regulatory or policy requirements into scalable product strategies.

Preferred Qualifications:
  • Experience leading teams or owning complex cross-functional initiatives.
  • Self-motivated and results-driven, able to influence stakeholders in fast-paced, ambiguous environments.