Bilingual LLM Assessment Analyst $36/hr

Mercor

Remote

EUR 40 000 - 60 000

Full-time

Posted yesterday

Job Summary

A leading AI recruitment firm is looking for an AI Model Evaluator to assess and annotate LLM-generated responses. The ideal candidate holds a Bachelor's degree and is a native Italian speaker with significant experience working with large language models. Responsibilities include evaluating response quality and ensuring compliance with conversational guidelines. Keen attention to detail and strong writing skills are essential for this position. This is a full-time or part-time contract position paying $36/hour.

Responsibilities

  • Evaluate LLM-generated responses to user queries.
  • Conduct fact-checking with trusted sources.
  • Generate evaluation data by annotating response strengths.
  • Assess reasoning quality, clarity, tone, and completeness.
  • Ensure alignment with conversational guidelines.
  • Apply consistent annotations following clear guidelines.

Skills

  • Native speaker or ILR 5/primary fluency (C2 on the CEFR scale) in Italian
  • Significant experience using large language models (LLMs)
  • Excellent writing skills
  • Strong attention to detail
  • Adaptable across topics
  • Background in structured analytical thinking
  • Excellent college-level mathematics skills

Education

Bachelor's degree

Job Description

About The Job

Mercor connects elite creative and technical talent with leading AI research labs. Headquartered in San Francisco, our investors include Benchmark, General Catalyst, Peter Thiel, Adam D'Angelo, Larry Summers, and Jack Dorsey.

Position

AI Model Evaluator

Type

Full-time or Part-time Contract Work

Compensation

$36/hour

Location

Restricted to Europe and the USA

Role Responsibilities
  • Evaluate LLM-generated responses for how effectively they answer user queries.
  • Conduct fact-checking using trusted public sources and external tools.
  • Generate high-quality human evaluation data by annotating response strengths, areas for improvement, and factual inaccuracies.
  • Assess reasoning quality, clarity, tone, and completeness of responses.
  • Ensure model responses align with expected conversational behavior and system guidelines.
  • Apply consistent annotations by following clear taxonomies, benchmarks, and detailed evaluation guidelines.
Qualifications
Must-Have
  • Bachelor’s degree
  • Native speaker or ILR 5/primary fluency (C2 on the CEFR scale) in Italian
  • Significant experience using large language models (LLMs)
  • Excellent writing skills
  • Strong attention to detail
  • Adaptable and comfortable moving across topics, domains, and customer requirements
  • Background or experience in domains requiring structured analytical thinking
  • Excellent college-level mathematics skills
Preferred
  • Prior experience with RLHF, model evaluation, or data annotation work
  • Experience writing or editing high-quality written content
  • Experience comparing multiple outputs and making fine-grained qualitative judgments
  • Familiarity with evaluation rubrics, benchmarks, or quality scoring systems
Application Process (Takes 20–30 mins to complete)
  • Upload resume
  • AI interview based on your resume
  • Submit form
Resources & Support
  • For details about the interview process and platform information, please check: https://talent.docs.mercor.com/welcome/welcome
  • For any help or support, reach out to: support@mercor.com

PS: Our team reviews applications daily. Please complete your AI interview and application steps to be considered for this opportunity.
