

Code Reviewer for LLM Data Training (R)

G2i Inc.

Remote

BRL 80,000 - 120,000

Part-time

Posted yesterday

Job summary

A flexible remote AI training company is seeking a Code Reviewer to oversee evaluations of AI-generated R code. You will audit annotator evaluations, ensure adherence to quality guidelines, and provide feedback to maintain standards. The ideal candidate has 5–7+ years in R development, strong debugging skills, and excellent communication abilities. The role offers personalized hourly rates based on experience and expertise, ensuring fair compensation for your contributions.

Qualifications

  • 5–7+ years of experience in R development, QA, or code review.
  • Comfortable using code execution environments and testing tools.
  • Experience working with structured QA or annotation workflows.

Responsibilities

  • Review and audit annotator evaluations of AI‑generated R code.
  • Assess whether the R code follows the prompt instructions and is functionally correct.
  • Provide constructive feedback to maintain high annotation standards.

Skills

  • R syntax
  • Debugging
  • Testing
  • Excellent written communication
  • Structured QA workflows

Job description

10-minute AI interview; the project starts Jan 29; rare languages have higher placement rates.

About the Company

G2i connects subject‑matter experts, students, and professionals with flexible, remote AI training opportunities, including annotation, evaluation, fact‑checking, and content review. We partner with leading AI teams, and all contributions are paid weekly once approved, ensuring consistent and reliable compensation.

About the Role

We’re hiring a Code Reviewer with deep R expertise to review evaluations completed by data annotators assessing AI‑generated R code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction‑following, factual correctness, and code functionality.

  • Review and audit annotator evaluations of AI‑generated R code.
  • Assess whether the R code follows the prompt instructions and is functionally correct and secure.
  • Validate code snippets using proof‑of‑work methodology (see the sketch after this list).
  • Identify inaccuracies in annotator ratings or explanations.
  • Provide constructive feedback to maintain high annotation standards.
  • Work within Project Atlas guidelines for evaluation integrity and consistency.
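
The posting does not spell out what a proof‑of‑work check looks like in practice. As a rough sketch of the kind of verification involved, a reviewer might re‑run an AI‑generated snippet in a clean session and test it against the prompt's requirements and a few edge cases. Everything below is hypothetical: the prompt, the top_words function, and the use of testthat are illustrative assumptions, not part of the project's actual workflow.

```r
# Hypothetical review scenario: the prompt asked for a function that returns
# the n most frequent words in a character vector, ignoring case.
library(testthat)

# AI-generated snippet under review (illustrative, not taken from the project)
top_words <- function(x, n = 3) {
  words <- tolower(unlist(strsplit(x, "\\s+")))
  words <- words[nzchar(words)]
  counts <- sort(table(words), decreasing = TRUE)
  names(head(counts, n))
}

# Proof-of-work: execute the code and check it against the prompt's requirements
test_that("follows the prompt: case-insensitive, ranked by frequency", {
  expect_equal(top_words(c("Cat cat dog", "dog dog bird"), n = 2),
               c("dog", "cat"))
})

test_that("handles edge cases: empty input, n larger than the vocabulary", {
  expect_length(top_words(character(0)), 0)
  expect_equal(top_words("one two", n = 10), c("one", "two"))
})
```

If a snippet fails checks like these while the annotator rated it as correct, that discrepancy is exactly the kind of inaccuracy the reviewer's feedback would flag.
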
Required Qualifications
  • 5–7+ years of experience in R development, QA, or code review.
  • Strong knowledge of R syntax, debugging, edge cases, and testing.
  • Comfortable using code execution environments and testing tools.
  • Excellent written communication and documentation skills.
  • Experience working with structured QA or annotation workflows.
  • English proficiency at B2, C1, C2, or Native level.
Preferred Qualifications
  • Experience in AI training, LLM evaluation, or model alignment.
  • Familiarity with annotation platforms.
  • Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.
Compensation

Hourly rates are personalized based on your experience level, educational background, location, and industry expertise. You’ll see your specific rate in your contract offer before signing. Rates for technical roles can vary significantly based on these factors and can be re‑evaluated for different projects based on your performance and experience.
