Join Appen Limited for Project Spearmint, a multilingual AI evaluation project in which you will review large language model (LLM) responses in different languages, focusing on either Tone or Fluency. Native-level fluency in a target language, along with strong English comprehension, is required. Your evaluations will help establish quality metrics vital to AI development, all while you enjoy the convenience of working from home.
As an evaluator, you will review short, pre-segmented datasets and assess model-generated replies based on specific quality dimensions. Your input will help validate evaluation frameworks and establish baseline quality metrics for future model development.
Key Responsibilities:
- Evaluate model replies in your native language based on either Tone or Fluency.
- Assess the overall quality, correctness, and naturalness of responses.
- Read the user prompt and two model replies, then rate each using a five-point scale.
- Provide brief rationales for any ratings at either extreme of the scale.
Project Breakdown:
Batch 1 – Tone: Determine whether replies are helpful, insightful, engaging, and fair. Flag formality mismatches, condescension, bias, or other tonal issues.
Batch 2 – Fluency: Assess grammatical accuracy, clarity, coherence, and natural flow.
This is a project-based opportunity with CrowdGen, where you will join the CrowdGen Community as an Independent Contractor. If selected, you will receive an email from CrowdGen with instructions for an account created under your application email address. Log in to this account, reset your password, complete the setup requirements, and proceed with your application for this role.
Make an impact on the future of AI – apply today and contribute from the comfort of your home.