Responsibilities
About the team
As a core member of our Seed Global Data Team, you'll be at the heart of our operations, gaining first-hand experience with the intricacies of training Large Language Models (LLMs) on diverse data sets.
Job Responsibilities
- Lead research on the latest developments in AI safety across academia and industry. Proactively identify limitations in existing evaluation paradigms and propose novel approaches to test models under real-world and edge‑case scenarios.
- Design and continuously refine safety evaluations for multimodal models. Define and implement robust evaluation metrics to assess safety‑related behaviors, failure modes, and alignment with responsible AI principles.
- Lead thorough analysis of safety evaluation results to surface safety issues stemming from model training, fine‑tuning, or product integration. Translate these findings into actionable insights to inform model iteration and product design improvements.
- Partner with cross‑functional stakeholders to build scalable safety evaluation workflows. Help establish feedback loops that continuously inform model development and risk mitigation strategies.
- Manage end‑to‑end project lifecycles, including scoping, planning, execution, and delivery. Effectively allocate team resources and coordinate efforts across functions to meet project goals and timelines.
Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:
- Hate speech or harassment
- Self‑harm or suicide‑related content
- Violence or cruelty
- Child safety violations
Support resources and resilience training will be provided to safeguard employee well‑being.
Qualifications
Minimum Qualifications
- Bachelor's degree or higher in a relevant field (e.g., AI, International Relations, Regional Studies, Engineering, Public Policy, or related disciplines).
- Exceptional proficiency in both English and Mandarin, with strong written and oral communication skills required to collaborate with internal teams and stakeholders across English and Mandarin‑speaking regions.
- Strong analytical skills, with the ability to interpret both qualitative and quantitative data and translate them into clear insights.
- Proven project management abilities, with experience leading cross‑functional initiatives in dynamic, fast‑paced environments.
- Creative problem‑solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.
Preferred Qualifications
- 5+ years of professional experience in AI safety, Trust & Safety, risk consulting, or risk management. Experience working at or with AI companies is highly desirable.
- Prior team management experience is a plus.
- Intellectually curious, self‑motivated, detail‑oriented, and team‑oriented.
- Deep interest in emerging technologies, user behavior, and the human impact of AI systems. Enthusiasm for learning from real‑world case studies and applying insights in a high‑impact setting.