
A tech-driven AI company is seeking an individual to design evaluation scenarios for LLM-based agents. The role involves creating structured test cases and defining scoring logic for agent behaviors. Ideal candidates have a strong background in Computer Science or a related field, with experience in QA or data analysis. The position offers flexible, remote work at a competitive rate of up to $30/hour, making it a good fit for those looking to build their skills in the AI industry.
At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.
The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real‑world expertise from across the globe.
We’re looking for someone who can design realistic and structured evaluation scenarios for LLM‑based agents. You’ll create test cases that simulate human‑performed tasks and define gold‑standard behavior to compare agent actions against. You’ll work to ensure each scenario is clearly defined, well‑scored, and easy to execute and reuse. You’ll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.
Simply apply to this post and, once you qualify, contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you'll help shape the future of AI while ensuring the technology benefits everyone.
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.