

Conversational AI Quality Assessor (Remote)

Mercor

Remote

EUR 40,000 - 60,000

Part-time

Posted yesterday



Job Summary

A leading AI research company seeks an AI Model Evaluator to assess the performance of LLM-generated responses. Applicants should have a Bachelor's degree, be fluent in Italian, and have significant experience with large language models along with strong analytical skills. The position offers flexible work options and involves detailed evaluation and annotation of responses in line with strict evaluation guidelines. The application process is completed online and includes uploading a resume and completing an AI interview.

Qualifications

  • Bachelor’s degree required.
  • Experience with large language models is essential.
  • Strong writing and analytical skills needed.

Responsibilities

  • Evaluate LLM-generated responses and their effectiveness.
  • Conduct fact-checking using trusted public sources.
  • Annotate response strengths and areas for improvement.

Skills

  • Native speaker or ILR 5/primary fluency in Italian
  • Significant experience using large language models (LLMs)
  • Excellent writing skills
  • Strong attention to detail
  • Excellent college-level mathematics skills

Education

Bachelor’s degree
Job Description
About The Job

Mercor connects elite creative and technical talent with leading AI research labs. Headquartered in San Francisco, our investors include Benchmark, General Catalyst, Peter Thiel, Adam D'Angelo, Larry Summers, and Jack Dorsey.

Position

AI Model Evaluator

Type

Full-time or Part-time Contract Work

Compensation

$36/hour

Location

Geographically restricted to Europe and the USA

Role Responsibilities
  • Evaluate LLM-generated responses on how effectively they answer user queries.
  • Conduct fact-checking using trusted public sources and external tools.
  • Generate high-quality human evaluation data by annotating response strengths, areas for improvement, and factual inaccuracies.
  • Assess reasoning quality, clarity, tone, and completeness of responses.
  • Ensure model responses align with expected conversational behavior and system guidelines.
  • Apply consistent annotations by following clear taxonomies, benchmarks, and detailed evaluation guidelines.
Qualifications
Must-Have
  • Bachelor’s degree
  • Native speaker or ILR 5/primary fluency (C2 on the CEFR scale) in Italian
  • Significant experience using large language models (LLMs)
  • Excellent writing skills
  • Strong attention to detail
  • Adaptable and comfortable moving across topics, domains, and customer requirements
  • Background or experience in domains requiring structured analytical thinking
  • Excellent college-level mathematics skills
Preferred
  • Prior experience with RLHF, model evaluation, or data annotation work
  • Experience writing or editing high-quality written content
  • Experience comparing multiple outputs and making fine-grained qualitative judgments
  • Familiarity with evaluation rubrics, benchmarks, or quality scoring systems
Application Process (Takes 20–30 mins to complete)
  • Upload resume
  • AI interview based on your resume
  • Submit form
Resources & Support
  • For details about the interview process and platform information, please check: https://talent.docs.mercor.com/welcome/welcome
  • For any help or support, reach out to: support@mercor.com

PS: Our team reviews applications daily. Please complete your AI interview and application steps to be considered for this opportunity.
