
A visionary AI company is seeking a candidate to design structured evaluation scenarios for LLM-based agents. The successful candidate will create test cases and define gold-standard behavior for AI tasks, ensuring clarity and accuracy. This position offers remote flexibility and competitive rates, and is ideal for individuals passionate about shaping AI's future while working on their own schedule.
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
What we do
The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting‑edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real‑world expertise from across the globe.
We’re looking for someone who can design realistic and structured evaluation scenarios for LLM‑based agents. You’ll create test cases that simulate human‑performed tasks and define gold‑standard behavior to compare agent actions against. You’ll work to ensure each scenario is clearly defined, well‑scored, and easy to execute and reuse. You’ll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.
Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule.
From creating training prompts to refining model responses, you’ll help shape the future of AI while ensuring technology benefits everyone. Contribute on your own schedule, from anywhere in the world. This opportunity allows you to: