A leading AI evaluation company seeks evaluators to assess model-generated responses for Tone and Fluency. This project-based, work-from-home role requires native-level fluency in a target language and strong English skills. Successful candidates will evaluate responses and provide insights on their quality and naturalness.
Join Project Spearmint, a multilingual AI response evaluation project reviewing large language model (LLM) outputs in different languages, focused on either Tone or Fluency. Native-level fluency in a target language, along with strong English comprehension, is required.
As an evaluator, you will review short, pre-segmented datasets and assess model-generated replies based on specific quality dimensions. Your input will help validate evaluation frameworks and establish baseline quality metrics for future model development.
Key Responsibilities (by project batch):
Batch 1 — Tone: Determine whether replies are helpful, insightful, engaging, and fair. Flag formality mismatches, condescension, bias, or other tonal issues.
Batch 2 — Fluency: Assess grammatical accuracy, clarity, coherence, and natural flow.
This is a project-based opportunity with CrowdGen, where you will join the CrowdGen Community as an Independent Contractor. If selected, you will receive an email from CrowdGen about an account created with your application email address. Log in to this account, reset your password, complete the setup requirements, and then proceed with your application for this role.
Make an impact on the future of AI — apply today and contribute from the comfort of your home.