Adversarial AI Testing

Freelancing.my

Remote

MYR 100,000 - 150,000

Full time

Job summary

A technology firm specializing in AI testing is seeking a Human Data Expert to red team conversational AI models. Candidates will generate high-quality human data and document vulnerabilities for improved AI safety. Ideal candidates bring red teaming experience, are communicative, and can use structured frameworks. This position offers the chance to work on sensitive and important projects with clear guidelines to support wellness.

Qualifications

  • Prior red teaming experience in AI adversarial work.
  • Ability to explain risks to various stakeholders.
  • Experience with socio-technical probing is a plus.

Responsibilities

  • Red team conversational AI models and agents.
  • Generate high-quality human data and annotate failures.
  • Document reproducibly with reports and datasets.

Skills

Red teaming experience
Curiosity and adversarial thinking
Structured frameworks
Clear communication
Adaptability

Job description

We are assembling a team of human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red team data that makes AI safer for our customers.

This project involves reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources. The topics involved will be clearly communicated to you before you are exposed to any content.

What You’ll Do

  • Red team conversational AI models and agents: jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation
  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
  • Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
  • Document reproducibly: produce reports, datasets, and attack cases customers can act on

Who You Are

  • You bring prior red teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
  • You’re curious and adversarial: you instinctively push systems to breaking points
  • You’re structured: you use frameworks or benchmarks, not just random hacks
  • You’re communicative: you explain risks clearly to technical and non-technical stakeholders
  • You’re adaptable: you thrive on moving across projects and customers

Nice-to-Have Specialties

  • Socio-technical risk: harassment/disinformation probing, abuse analysis, conversational AI testing
  • Creative probing: psychology, acting, writing for unconventional adversarial thinking

What Success Looks Like

  • You uncover vulnerabilities automated tests miss
  • You deliver reproducible artifacts that strengthen customer AI systems
  • Mercor customers trust the safety of their AI because you’ve already probed it like an adversary
