
A leading research institution in Singapore seeks a data scientist to apply computational modelling and data analysis to tackle online harms. The candidate will assess risks and collaborate with interdisciplinary teams, and must hold a PhD in a relevant field with proficiency in Python or R. Candidates without prior industry experience are welcome to apply. The role emphasises an inclusive culture and professional growth.
The Centre for Advanced Technologies in Online Safety (CATOS; https://www.catos.sg) was established in 2023 to host Singapore's Online Trust and Safety (OTS) Programme, a national research programme that leads the advancement of whole-of-nation technology capabilities to monitor and tackle online harms. The Systems Engineering Pillar of CATOS focuses on translational research and development, including evaluating, testing and integrating research output into needle-moving applications. One of the major outcomes will be a technological platform with an integrated suite of deep tech OTS engines, which analyses various internet sites and platforms for fast-trending harmful online content, such as non-factual claims, deepfakes, and hateful and toxic content. Different combinations of OTS engines can then be integrated and adapted to address the specific requirements of various stakeholders.
This role will be responsible for applying data science, computational modelling, and social science theories to study, detect, and mitigate harmful behaviours across social media platforms. The candidate will work at the intersection of human behaviour, platform dynamics, and algorithmic governance to foster a safer and more inclusive online environment.
The candidate will assume the following roles and responsibilities:
Examine use cases of online trust and safety technologies, translating research findings into actionable insights that contribute to content moderation strategies, policy updates, and/or product design.
Explore innovative data sources and computational tools, and develop models and metrics to detect and assess risks (e.g., coordinated inauthentic behaviour, abuse patterns, bot activity).
Conduct quantitative and mixed-methods research to understand harmful behaviours (e.g., misinformation, hate speech, harassment, manipulation) and their spread on digital platforms.
Collaborate with data scientists, policy experts, engineers, and product managers to inform and evaluate safety-related decisions.
Stay current on academic and industry trends in trust and safety, online behaviour, and computational social science.
JOB REQUIREMENTS:
Our research and engineering work is highly interdisciplinary, agile, and pragmatic in nature.
When applying, please share at least two representative research projects or publications that you developed independently or contributed to significantly.
Please note that only shortlisted candidates will be notified.
The above eligibility criteria are not exhaustive. A*STAR may include additional selection criteria based on its prevailing recruitment policies. These policies may be amended from time to time without notice.