About the Team
As a core member of our LLM Global Data Team, you will be at the heart of our operations, gaining first‑hand experience in training Large Language Models (LLMs) with diverse data sets.
Job Responsibilities
- Conduct research on the latest developments in AI safety across academia and industry; proactively identify limitations in existing evaluation paradigms and propose novel approaches to test models under real‑world and edge‑case scenarios.
- Design and continuously refine safety evaluations for multimodal models, defining and implementing robust evaluation metrics to assess safety‑related behaviors, failure modes, and alignment with responsible AI principles.
- Analyze safety evaluation results to surface issues stemming from model training, fine‑tuning, or product integration, translating findings into actionable insights to inform model iteration and product design improvements.
- Partner with cross‑functional stakeholders to build scalable safety evaluation workflows, establishing feedback loops that continuously inform model development and risk mitigation strategies.
- Manage end‑to‑end project lifecycles, including scoping, planning, execution, and delivery, effectively allocating team resources and coordinating efforts across functions to meet project goals and timelines.
Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:
- Hate speech or harassment
- Self‑harm or suicide‑related content
- Violence or cruelty
- Child safety violations
Support resources and resilience training will be provided to safeguard employee well‑being.
Qualifications
Minimum Qualifications
- Bachelor's degree or higher, preferably in AI policy, computer science, engineering, journalism, international relations, law, regional studies, or a related discipline.
- Strong command of English in both written and verbal communication; proficiency in other languages is a plus.
- Strong analytical skills, with the ability to interpret qualitative and quantitative data and translate them into clear insights.
- Proven project management abilities, with experience leading cross‑functional initiatives in dynamic, fast‑paced environments.
- Creative problem‑solving mindset, comfortable working under ambiguity and leveraging tools and technology to improve processes and outputs.
Preferred Qualifications
- Professional experience in AI safety, Trust & Safety, risk consulting, or risk management; experience working at or with AI companies is highly desirable.
- Intellectually curious, self‑motivated, detail‑oriented, and team‑oriented.
- Deep interest in emerging technologies, user behavior, and the human impact of AI systems; enthusiasm for learning from real‑world case studies and applying insights in a high‑impact setting.
Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.