Senior Model Safety Analyst

BYTEDANCE PTE. LTD.

Singapore

On-site

SGD 90,000 - 120,000

Full time

Job summary

A leading technology company in Singapore is seeking an experienced Senior Model Safety Analyst. You will lead research on AI safety, design evaluations for multimodal models, and manage projects focused on risk mitigation. The ideal candidate has a Bachelor's degree, proficiency in English and Mandarin, and strong analytical and project management skills. Join us to shape the future of AI responsibly, with support resources provided for employee wellbeing.

Benefits

Support resources and resilience training

Qualifications

  • Bachelor's degree or higher in AI, International Relations, or related fields.
  • Exceptional proficiency in English and Mandarin for effective communication.
  • Strong analytical skills to interpret data and translate insights.

Responsibilities

  • Lead research on AI safety developments and propose evaluation approaches.
  • Design safety evaluations for models and implement robust metrics.
  • Analyze safety evaluation results and inform model iterations.
  • Partner with stakeholders to build scalable safety evaluation workflows.
  • Manage end-to-end project lifecycles and coordinate team resources.

Skills

English proficiency
Mandarin proficiency
Analytical skills
Project management
Creative problem-solving

Education

Bachelor's degree in relevant field

Job description

About the team

As a core member of our Seed Global Data Team, you'll be at the heart of our operations and gain first‑hand experience with the intricacies of training Large Language Models (LLMs) on diverse data sets.

Job Responsibilities
  • Lead research on the latest developments in AI safety across academia and industry. Proactively identify limitations in existing evaluation paradigms and propose novel approaches to test models under real‑world and edge‑case scenarios.
  • Design and continuously refine safety evaluations for multimodal models. Define and implement robust evaluation metrics to assess safety‑related behaviors, failure modes, and alignment with responsible AI principles.
  • Lead thorough analysis of safety evaluation results to surface safety issues stemming from model training, fine‑tuning, or product integration. Translate these findings into actionable insights to inform model iteration and product design improvements.
  • Partner with cross‑functional stakeholders to build scalable safety evaluation workflows. Help establish feedback loops that continuously inform model development and risk mitigation strategies.
  • Manage end‑to‑end project lifecycles, including scoping, planning, execution, and delivery. Effectively allocate team resources and coordinate efforts across functions to meet project goals and timelines.

Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:

  • Hate speech or harassment
  • Self‑harm or suicide‑related content
  • Violence or cruelty
  • Child safety violations

Support resources and resilience training will be provided to support employee well‑being.

Qualifications
Minimum Qualifications
  • Bachelor's degree or higher in a relevant field (e.g., AI, International Relations, Regional Studies, Engineering, Public Policy, or related disciplines).
  • Exceptional proficiency in both English and Mandarin, with strong written and oral communication skills required to collaborate with internal teams and stakeholders across English and Mandarin‑speaking regions.
  • Strong analytical skills, with the ability to interpret both qualitative and quantitative data and translate them into clear insights.
  • Proven project management abilities, with experience leading cross‑functional initiatives in dynamic, fast‑paced environments.
  • Creative problem‑solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.
Preferred Qualifications
  • 5+ years of professional experience in AI safety, Trust & Safety, risk consulting, or risk management. Experience working at or with AI companies is highly desirable.
  • Prior team management experience is a plus.
  • Intellectually curious, self‑motivated, detail‑oriented, and team‑oriented.
  • Deep interest in emerging technologies, user behavior, and the human impact of AI systems. Enthusiasm for learning from real‑world case studies and applying insights in a high‑impact setting.