Evaluation Scenario Writer – QA

Mindrift

United Kingdom

Remote

GBP 60,000 - 80,000

Part time

Today

Job summary

A leading AI consulting firm is seeking an Evaluation Scenario Writer – QA to validate test scenarios for large language models. This part-time, flexible role requires strong QA skills and offers rates of up to $44/hour. Applicants should have a background in test design, strong critical thinking skills, and familiarity with scripting languages such as Python or JS. Remote work lets contributors apply their expertise on their own schedule while gaining experience with AI systems.

Benefits

Flexible working hours
Work from anywhere
Competitive pay based on expertise

Qualifications

  • Strong QA background (manual or automation) in complex testing environments.
  • Ability to spot logical inconsistencies in test scenarios.
  • Experience debugging structured test formats (JSON, YAML).

Responsibilities

  • Review and validate test scenarios from Evaluation Writers.
  • Spot logical inconsistencies and suggest improvements.
  • Collaborate with developers to automate parts of the review.

Skills

QA background
Understanding of test design
Ability to evaluate logic of test scenarios
Experience reviewing structured test cases
Familiarity with Python and JS

Job description

At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting‑edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real‑world expertise from across the globe.

Who we’re looking for

We’re looking for curious, intellectually proactive contributors who never miss an error and can think outside the box when brainstorming solutions. Are you comfortable with ambiguity and complexity? Does an async, remote, flexible opportunity sound exciting? Would you like to learn how modern AI systems are tested and evaluated?

This is a flexible, project‑based opportunity well‑suited for:

  • Analysts, researchers, or consultants with strong critical thinking skills
  • Students (senior undergrads / grad students) looking for an intellectually interesting gig
  • People open to a part‑time and non‑permanent opportunity

About the project

We’re on the hunt for an Evaluation Scenario Writer – QA for a new project focused on ensuring the quality and correctness of evaluation scenarios created for LLM agents. The work blends manual scenario validation, automated test thinking, and collaboration with writers and engineers. You will verify test logic, flag inconsistencies, and help maintain a high bar for evaluation coverage and clarity.
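
To give a flavour of the work, here is a minimal, purely illustrative sketch of the kind of consistency check a reviewer might script. The JSON schema and field names ("steps", "id", "action", "expected") are assumptions for illustration only, not the project's actual format or tooling:

    import json

    # Hypothetical scenario format: a top-level "steps" list where each
    # step has an "id", an "action", and an "expected" check. These field
    # names are illustrative assumptions, not the project's real schema.
    REQUIRED_FIELDS = {"id", "action", "expected"}

    def validate_scenario(path):
        """Flag common issues in a JSON test scenario: missing fields,
        duplicate step ids, and steps with no expected check."""
        with open(path) as f:
            scenario = json.load(f)

        issues = []
        seen_ids = set()
        for i, step in enumerate(scenario.get("steps", [])):
            missing = REQUIRED_FIELDS - step.keys()
            if missing:
                issues.append(f"step {i}: missing fields {sorted(missing)}")
            step_id = step.get("id")
            if step_id is not None and step_id in seen_ids:
                issues.append(f"step {i}: duplicate id {step_id!r}")
            seen_ids.add(step_id)
            if not step.get("expected"):
                issues.append(f"step {i}: empty or missing expected check")
        return issues

    if __name__ == "__main__":
        for issue in validate_scenario("scenario.json"):
            print(issue)

In practice, much of the role is exactly this kind of thinking applied by hand: reading a scenario and asking whether every step is well-formed, unambiguous, and actually checkable.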

What you’ll be doing

  • Reviewing and validating test scenarios from Evaluation Writers
  • Spotting logical inconsistencies, ambiguities, or missing checks
  • Suggesting improvements to structure, edge cases, or scoring logic
  • Collaborating with infrastructure and tool developers to automate parts of the review
  • Creating clean and testable examples for others to follow

Although we’re only recruiting for this current project, contributors who consistently deliver high-quality submissions may be invited to collaborate on future projects.

How to get started

Apply to this post, qualify, and get the chance to contribute to a project aligned with your skills, on your own schedule. Shape the future of AI while building tools that benefit everyone.

Requirements

The ideal contributor will have:

  • Strong QA background (manual or automation), preferably in complex testing environments
  • Understanding of test design, regression testing, and edge case detection
  • Ability to evaluate logic and structure of test scenarios (even if written by others)
  • Experience reviewing and debugging structured test case formats (JSON, YAML)
  • Familiarity with Python and JS scripting for test automation or validation
  • Clear communication and documentation skills
  • Willingness to occasionally write or refactor test scenarios

We also value applicants who have:

  • Experience testing AI‑based systems or NLP applications
  • Familiarity with scoring systems and behavioral evaluation
  • Git/GitHub workflow familiarity (PR review, versioning of test cases)
  • Experience using test management systems or tracking tools

Benefits

Contribute on your own schedule, from anywhere in the world. This opportunity allows you to:

  • Get paid for your expertise, with rates that can go up to $44/hour depending on your skills, experience, and project needs
  • Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
  • Participate in an advanced AI project and gain valuable experience to enhance your portfolio
  • Influence how future AI models understand and communicate in your field of expertise

Seniority level
  • Internship
Employment type
  • Part‑time
Job function
  • Other
Industries
  • IT Services and IT Consulting
