
Evaluation Scenario Writer - AI Agent Testing Specialist

Mindrift

Remote


Part-time
EUR 30,000 - 50,000

Posted yesterday


Job Description

A tech-driven AI company is seeking an individual to design evaluation scenarios for LLM-based agents. This role includes creating structured test cases and defining scoring logic for agent behaviors. Ideal candidates will have a strong background in Computer Science or related fields, with experience in QA or data analysis. The position offers flexible, remote work with a competitive pay rate of up to $30/hour, making it suitable for those looking to enhance their skills in the AI industry.

Benefits

Competitive hourly rate
Flexible work schedule
Experience in advanced AI projects

Skills

  • Bachelor’s or Master’s Degree in Computer Science, Software Engineering, or related fields.
  • Experience in QA, software testing, data analysis, or NLP annotation.
  • Basic experience with Python and JavaScript.

Responsibilities

  • Create structured test cases that simulate complex human workflows.
  • Define gold-standard behavior and scoring logic to evaluate agent actions.
  • Iterate on prompts, instructions, and test cases to improve clarity.

Knowledge

Analytical mindset
Attention to detail
Strong written communication skills in English
Understanding of test design principles
Familiarity with JSON/YAML
Curiosity for AI-generated content

Education

Bachelor’s and/or Master’s Degree in relevant fields

Tools

Python
JavaScript
Company Overview

At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

What We Do

The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real‑world expertise from across the globe.

About the Role

We’re looking for someone who can design realistic and structured evaluation scenarios for LLM‑based agents. You’ll create test cases that simulate human‑performed tasks and define gold‑standard behavior to compare agent actions against. You’ll work to ensure each scenario is clearly defined, well‑scored, and easy to execute and reuse. You’ll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.

Responsibilities
  • Create structured test cases that simulate complex human workflows.
  • Define gold‑standard behavior and scoring logic to evaluate agent actions.
  • Analyze agent logs, failure modes, and decision paths.
  • Work with code repositories and test frameworks to validate your scenarios.
  • Iterate on prompts, instructions, and test cases to improve clarity and difficulty.
  • Ensure scenarios are production‑ready, easy to run, and reusable.
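
To illustrate what "gold-standard behavior and scoring logic" can mean in practice, here is a minimal Python sketch. The scenario steps and the ordered-overlap scoring rule are hypothetical assumptions for illustration, not Mindrift's actual evaluation framework.

```python
# Hypothetical sketch: score an agent's logged actions against a
# gold-standard path. The action names and the scoring rule here are
# illustrative assumptions, not Mindrift's actual framework.

def score_trajectory(gold_path, agent_actions):
    """Fraction of gold-path steps the agent completed, in order."""
    idx = 0
    for action in agent_actions:
        if idx < len(gold_path) and action == gold_path[idx]:
            idx += 1
    return idx / len(gold_path)

gold = ["open_ticket", "search_kb", "draft_reply", "send_reply"]
run = ["open_ticket", "search_kb", "draft_reply", "escalate"]

print(score_trajectory(gold, run))  # 0.75 — three of four gold steps, in order
```

A real framework would likely weight steps, allow alternative valid paths, and penalize unsafe actions, but the core idea is the same: a reference trajectory plus a deterministic comparison rule.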
How to Get Started

Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you’ll help shape the future of AI while ensuring technology benefits everyone.

Requirements
  • Bachelor’s and/or Master’s Degree in Computer Science, Software Engineering, Data Science/Analytics, Artificial Intelligence/ML, Computational Linguistics/NLP, Information Systems or other related fields.
  • Background in QA, software testing, data analysis, or NLP annotation.
  • Good understanding of test design principles (e.g., reproducibility, coverage, edge cases).
  • Strong written communication skills in English.
  • Comfortable with structured formats like JSON/YAML for scenario description.
  • Can define expected agent behaviors (gold paths) and scoring logic.
  • Basic experience with Python and JavaScript.
  • Curious and open to working with AI‑generated content, agent logs, and prompt‑based behavior.
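
As a concrete (and entirely hypothetical) example of the "structured formats like JSON/YAML for scenario description" requirement, a scenario file might look like the JSON below; the schema and field names are illustrative assumptions, not a format Mindrift prescribes.

```python
import json

# Hypothetical scenario description: the schema (field names, values)
# is an illustrative assumption, not a format Mindrift prescribes.
scenario = json.loads("""
{
  "id": "support-ticket-001",
  "goal": "Resolve a customer refund request",
  "setup": {"tools": ["crm", "email"], "persona": "support agent"},
  "gold_path": ["open_ticket", "search_kb", "draft_reply", "send_reply"],
  "scoring": {"metric": "ordered_overlap", "pass_threshold": 0.75}
}
""")

print(scenario["gold_path"][0])  # open_ticket
```

Keeping scenarios in a structured format like this makes them machine-checkable, versionable in a code repository, and reusable across test runs.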
Nice to Have
  • Experience in writing manual or automated test cases.
  • Familiarity with LLM capabilities and typical failure modes.
  • Understanding of scoring metrics (precision, recall, coverage, reward functions).
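
The scoring metrics named above (precision, recall) can be sketched for agent evaluation as overlaps between action sets; the example below is a simplified, hypothetical illustration that ignores action ordering.

```python
# Hypothetical sketch: precision and recall of an agent's actions
# against a gold-standard action set (order ignored for simplicity).
def precision_recall(gold_actions, agent_actions):
    gold, agent = set(gold_actions), set(agent_actions)
    hits = len(gold & agent)
    precision = hits / len(agent) if agent else 0.0
    recall = hits / len(gold) if gold else 0.0
    return precision, recall

p, r = precision_recall(
    ["open_ticket", "search_kb", "send_reply"],
    ["open_ticket", "search_web", "send_reply"],
)
# Two of three actions overlap: precision = recall = 2/3
print(p, r)
```

Precision penalizes extraneous agent actions, recall penalizes missed gold actions; reward functions generalize this by assigning weighted credit per step.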
Benefits
  • Get paid for your expertise, with rates that can go up to $30/hour depending on your skills, experience, and project needs.
  • Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments.
  • Participate in an advanced AI project and gain valuable experience to enhance your portfolio.
  • Influence how future AI models understand and communicate in your field of expertise.
Location & Eligibility

This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.
