
A tech consulting company is seeking a Code Reviewer with extensive experience in Go development. This role involves reviewing evaluations of AI-generated Go code to ensure adherence to quality standards and accuracy. Candidates should have at least 5 years of experience in Go, coupled with strong debugging and communication skills. Compensation is personalized based on factors like experience and location. This is a remote position based in Brazil, offering flexibility and competitive pay.
G2i connects subject‑matter experts, students, and professionals with flexible, remote AI training opportunities, including annotation, evaluation, fact‑checking, and content review. We partner with leading AI teams, and all contributions are paid weekly once approved, ensuring consistent and reliable compensation.
We're hiring a Code Reviewer with deep Go expertise to review evaluations completed by data annotators assessing AI-generated Go code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction‑following, factual correctness, and code functionality.
Responsibilities:
Review and audit annotator evaluations of AI-generated Go code.
Assess if the Go code follows the prompt instructions, is functionally correct, and secure.
Validate code snippets using proof‑of‑work methodology.
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.
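As a flavor of the "validate code snippets" step above, a reviewer typically runs the AI-generated snippet against a small set of table-driven checks before judging an annotator's rating. The sketch below is purely illustrative (the `reverse` function and its cases are hypothetical, not part of the role description): it shows the kind of proof-of-work a reviewer might produce to confirm functional correctness, including a multi-byte edge case.

```go
package main

import "fmt"

// reverse stands in for an AI-generated snippet under review
// (hypothetical example). It reverses rune-by-rune so that
// multi-byte characters survive intact.
func reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}

func main() {
	// Table-driven checks: concrete evidence the snippet behaves
	// as the prompt demanded, rather than a judgment by eye.
	cases := []struct{ in, want string }{
		{"", ""},
		{"go", "og"},
		{"héllo", "olléh"}, // multi-byte edge case
	}
	for _, c := range cases {
		if got := reverse(c.in); got != c.want {
			fmt.Printf("FAIL reverse(%q) = %q, want %q\n", c.in, got, c.want)
			return
		}
	}
	fmt.Println("all checks passed")
}
```

Attaching output like this to a review is what makes a disagreement with an annotator's rating verifiable rather than a matter of opinion.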
Requirements:
5–7+ years of experience in Go development, QA, or code review.
Strong knowledge of Go syntax, concurrency patterns, debugging, edge-case handling, and testing.
Experience with Go modules, testing frameworks, and standard tooling.
Comfortable using code execution environments and debugging tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2, C1, C2, or Native level.
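To make the "concurrency patterns" requirement above concrete, here is a minimal worker-pool sketch of the kind of idiom a reviewer is expected to recognize and vet in AI-generated Go code (an illustrative example, not an exercise from the role):

```go
package main

import (
	"fmt"
	"sync"
)

// squareAll fans work out to a fixed number of goroutines over a
// channel of job indices, then waits for all of them to finish.
func squareAll(nums []int, workers int) []int {
	jobs := make(chan int)
	results := make([]int, len(nums))
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				// Each index is written by exactly one goroutine,
				// so no mutex is needed around results.
				results[i] = nums[i] * nums[i]
			}
		}()
	}
	for i := range nums {
		jobs <- i
	}
	close(jobs) // lets the range loops in the workers terminate
	wg.Wait()
	return results
}

func main() {
	fmt.Println(squareAll([]int{1, 2, 3, 4}, 2)) // [1 4 9 16]
}
```

Spotting the subtle bugs AI models tend to introduce in code like this (a missing `close`, a captured loop variable, a data race on shared state) is exactly the kind of judgment the requirements above describe.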
Nice to have:
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.
Background in microservices architecture or cloud‑native development.
Hourly rates are personalized based on your experience level, educational background, location, and industry expertise. You'll see your specific rate in your contract offer before signing. Rates for technical roles vary significantly with these factors and may be re‑evaluated for future projects based on your performance and experience.