
English Language Specialist

iMerit Technology

Sapucaia do Sul

Remote

BRL 120.000 - 160.000

Full-time

Posted yesterday

Job summary

A leading technology evaluation firm seeks a detail-oriented Multimodal GenAI Evaluation Analyst in Brazil. The role involves evaluating AI outputs across text, image, and video modalities, ensuring quality and cultural alignment. Ideal candidates have a relevant degree and experience in data annotation or AI evaluation. The role offers competitive compensation and remote work arrangements.

Benefits

Competitive compensation
Flexible remote working arrangements
Continuous learning opportunities

Qualifications

  • 1+ years of experience in data annotation, LLM evaluation, or related AI/ML domains.
  • Experience with data annotation tools and software platforms.
  • Ability to adapt quickly to changing project directions.

Responsibilities

  • Evaluate outputs generated by LLMs across multiple modalities.
  • Assess quality against project-specific criteria such as correctness and coherence.
  • Identify subtle errors and biases in AI responses.

Skills

Critical reading skills
Observational skills
Evaluative skills
Attention to detail
Language comprehension (CEFR B2)

Education

Bachelor's degree or equivalent

Tools

Data annotation tools

Job description

iMerit seeks detail-oriented and analytically minded Multimodal GenAI Evaluation Analysts to perform highly nuanced evaluations of AI system outputs across different modalities: text, image, video, and multimodal interactions. Analysts will assess the accuracy, appropriateness, quality, clarity, and cultural alignment of model outputs against complex guidelines, ensuring that results align with project standards and real-world use cases. These evaluations will directly inform the development and fine-tuning of advanced large language models (LLMs), vision models (LVMs), and multimodal AI systems.

Role Responsibilities

Evaluate outputs generated by LLMs across multiple modalities (text, image captions, video descriptions, and multimodal prompts).

Assess quality against project-specific criteria such as correctness, coherence, completeness, style, cultural appropriateness, and safety.

Identify subtle errors, hallucinations, or biases in AI responses.

Apply domain expertise and logical reasoning to resolve ambiguous or unclear outputs.

Provide detailed written feedback, tagging, and scoring of outputs to ensure consistency across the evaluation team.

Escalate unclear cases and contribute to refining evaluation guidelines.

Collaborate with Project Managers and Quality Leads to meet accuracy, reliability, and turnaround benchmarks.

Skills & Competencies

Strong critical reading, observational, and evaluative skills across different modalities.

Ability to articulate nuanced judgments with precision and clarity.

Excellent English comprehension (CEFR B2 or above); additional languages a plus.

Familiarity with LLMs, generative AI, and multimodal systems.

Strong attention to detail and ability to apply guidelines consistently.

Awareness of cultural and linguistic nuances, including potential bias and harm in AI outputs.

Comfort with evolving workflows, rapid feedback cycles, and complex quality frameworks.

Requirements

Bachelor's degree / diploma or equivalent educational qualification.

1+ years of experience in data annotation, LLM evaluation, content moderation, or related AI/ML domains.

Demonstrated experience working with data annotation tools and software platforms.

Strong understanding of language and multimodal communication (instruction following in image generation, fact-checking, narrative coherence in video, etc.).

Ability to adapt quickly to changing project directions and fast-paced work environments.

Previous experience creating or annotating complex data specifically for Large Language Model (LLM) training.

Prior exposure to generative AI, prompt engineering, or LLM fine-tuning workflows is a plus.

While moderation of high-harm/high-risk material is not part of this role, candidates should be aware that occasional exposure to NSFW or otherwise sensitive content may occur due to imperfections in client-provided datasets. Applicants should confirm that they are comfortable working in environments where such incidental exposure is a possibility.

What We Offer

Opportunities to shape the evaluation standards for next-generation multimodal AI systems.

Innovative and supportive global working environment.

Competitive compensation and flexible remote working arrangements.

Continuous learning and growth in applied AI evaluation.

Join iMerit to help shape the benchmarks and standards that ensure AI systems are accurate, safe, and culturally aware across text, vision, and multimodal applications. If rigorous analysis and innovation excite you, we encourage you to apply!
