
A flexible remote AI training company is seeking a Code Reviewer with deep Go expertise. In this role, you will review evaluations of AI-generated Go code, ensuring adherence to quality guidelines. Candidates should have 5–7+ years of experience in Go development, strong knowledge of Go syntax, and excellent written communication skills. Compensation is hourly and personalized to your experience and background, with fair rates set per project. Join us to contribute to high-quality AI training.
G2i connects subject‑matter experts, students, and professionals with flexible, remote AI training opportunities, including annotation, evaluation, fact‑checking, and content review. We partner with leading AI teams, and all contributions are paid weekly once approved, ensuring consistent and reliable compensation.
We're hiring a Code Reviewer with deep Go expertise to review evaluations completed by data annotators assessing AI-generated Go code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction‑following, factual correctness, and code functionality.
Review and audit annotator evaluations of AI-generated Go code.
Assess whether the Go code follows the prompt instructions, is functionally correct, and is secure.
Validate code snippets using proof-of-work methodology (see the sketch after this list for a rough idea of what this looks like in practice).
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.
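The posting does not spell out what proof-of-work validation involves, but in practice it generally means running the AI-generated code rather than judging it on appearance alone. As a minimal, purely illustrative sketch, ReverseWords below stands in for a hypothetical AI-generated snippet, and the table-driven test is the kind of quick check a reviewer might run with go test to confirm the code actually handles the prompt's requirements and edge cases. Neither the function nor the test is part of Project Atlas.

```go
// reverse_words_test.go
//
// Illustrative only: ReverseWords is a hypothetical stand-in for an
// AI-generated snippet under review; the posting does not prescribe any
// particular code or test harness.
package review

import (
	"strings"
	"testing"
)

// ReverseWords reverses the order of whitespace-separated words in s.
// It represents the kind of snippet an annotator might have rated.
func ReverseWords(s string) string {
	words := strings.Fields(s)
	for i, j := 0, len(words)-1; i < j; i, j = i+1, j-1 {
		words[i], words[j] = words[j], words[i]
	}
	return strings.Join(words, " ")
}

// TestReverseWords is a table-driven check a reviewer could run with
// `go test` to verify the snippet's claimed behavior, including edge
// cases such as empty input and extra whitespace.
func TestReverseWords(t *testing.T) {
	cases := []struct {
		name string
		in   string
		want string
	}{
		{"simple", "hello world", "world hello"},
		{"empty", "", ""},
		{"extra spaces", "  a  b ", "b a"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := ReverseWords(tc.in); got != tc.want {
				t.Errorf("ReverseWords(%q) = %q, want %q", tc.in, got, tc.want)
			}
		})
	}
}
```

The reviewer would then compare what the tests show against the annotator's rating and explanation, and flag any mismatch in the evaluation.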
5–7+ years of experience in Go development, QA, or code review.
Strong knowledge of Go syntax, concurrency patterns, debugging, edge cases, and testing.
Experience with Go modules, testing frameworks, and standard tooling.
Comfortable using code execution environments and debugging tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2, C1, C2, or Native level.
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.
Background in microservices architecture or cloud‑native development.
Hourly rates are personalized based on your experience level, educational background, location, and industry expertise. You'll see your specific rate in your contract offer before signing. Rates for technical roles vary significantly with these factors and may be re-evaluated for other projects based on your performance and experience.