Model Safety Analyst - Seed Global Data

BYTEDANCE PTE. LTD.

Singapore

On-site

SGD 80,000 - 100,000

Full time

Job summary

A leading tech firm in Singapore seeks a candidate for a role focused on AI safety evaluations. You will conduct research on AI safety, design evaluation metrics, and analyze results to inform model improvements. A Bachelor's degree in a related field and strong English communication skills are required. The position offers the opportunity to engage with cutting-edge AI technologies and contribute to the safety of AI systems in a dynamic environment.

Benefits

Support resources for employee well-being

Qualifications

  • Strong command of English in both written and verbal communication.
  • Experience leading cross-functional initiatives in dynamic environments.
  • Creative problem-solving mindset under ambiguity.

Responsibilities

  • Conduct research on AI safety developments.
  • Design safety evaluations for models.
  • Analyze safety evaluation results for actionable insights.
  • Partner with stakeholders to build safety evaluation workflows.
  • Manage project lifecycles and allocate team resources.

Skills

Research on AI safety
Project management
Analytical skills
Communication skills

Education

Bachelor's degree or higher in a related discipline

Job description

Responsibilities

About the team: As a core member of our LLM Global Data Team, you'll be at the heart of our operations. Gain first‑hand experience in understanding the intricacies of training Large Language Models (LLMs) with diverse data sets.

Job Responsibilities
  • Conduct research on the latest developments in AI safety across academia and industry. Proactively identify limitations in existing evaluation paradigms and propose novel approaches to test models under real-world and edge-case scenarios.
  • Design and continuously refine safety evaluations for multimodal models. Define and implement robust evaluation metrics to assess safety-related behaviors, failure modes, and alignment with responsible AI principles.
  • Conduct a thorough analysis of safety evaluation results to surface safety issues stemming from model training, fine‑tuning, or product integration. Translate these findings into actionable insights to inform model iteration and product design improvements.
  • Partner with cross‑functional stakeholders to build scalable safety evaluation workflows. Help establish feedback loops that continuously inform model development and risk mitigation strategies.
  • Manage end‑to‑end project lifecycles, including scoping, planning, execution, and delivery. Effectively allocate team resources and coordinate efforts across functions to meet project goals and timelines.

Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:

  • Hate speech or harassment
  • Self‑harm or suicide‑related content
  • Violence or cruelty
  • Child safety violations

Support resources and resilience training will be provided to support employee well‑being.

Qualifications
Minimum Qualifications
  • Bachelor's degree or higher, preferably in AI policy, Computer Science, Engineering, journalism, international relations, law, regional studies, or a related discipline.
  • Strong command of English in both written and verbal communication. Proficiency in other languages is a plus, as some projects may involve cross‑regional collaboration or content in non‑English languages.
  • Strong analytical skills, with the ability to interpret both qualitative and quantitative data and translate them into clear insights.
  • Proven project management abilities, with experience leading cross‑functional initiatives in dynamic, fast‑paced environments.
  • Creative problem‑solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.
Preferred Qualifications
  • Professional experience in AI safety, Trust & Safety, Risk consulting, or Risk management. Experience working at or with AI companies is highly desirable.
  • Intellectually curious, self‑motivated, detail‑oriented, and team‑oriented.
  • Deep interest in emerging technologies, user behavior, and the human impact of AI systems. Enthusiasm for learning from real‑world case studies and applying insights in a high‑impact setting.