Coding Expert Raters UK

TransPerfect

Remote

GBP 50,000 - 70,000

Full time

Today

Job summary

A leading AI solutions provider is seeking a Quality Assessment Freelancer to evaluate AI model responses and provide feedback. The ideal candidate is proficient in Python and experienced in web technologies and machine learning. Responsibilities include code annotation, validation of datasets, and collaboration with engineers. This is a remote full-time position offering the opportunity to work on innovative projects in AI and data science.

Qualifications

  • Proficient in Python and at least one additional programming language.
  • Experience with web scraping and APIs.
  • Knowledge of ML model development and deep learning frameworks.

Responsibilities

  • Evaluate the quality of AI model responses.
  • Generate and annotate code snippets according to project guidelines.
  • Provide feedback on model outputs.

Skills

Proficient in Python
Web technologies (HTML/CSS/JavaScript)
Machine Learning
Data analysis and visualization
Familiarity with Git/GitHub

Tools

TensorFlow
PyTorch
SQL
NoSQL

Job description

Work Location : UK Remote

Engagement Model : Freelancer / Independent Contractor

Start Date : ASAP

Qualification Requirements : Successful completion of a role‑specific written assessment and a background check.

Main Responsibilities
  • Model Quality Assessment: Evaluate the quality of AI model responses involving code, machine learning and AI, identifying errors, inefficiencies and non‑compliance with established standards.
  • Code Annotation and Labeling: Accurately generate, annotate and label code snippets, algorithms and technical documentation according to project‑specific guidelines.
  • Review and Feedback: Provide detailed constructive feedback on model and other outputs.
  • Comparative Analysis: Compare multiple outputs and rank them based on criteria such as correctness, efficiency, readability and adherence to programming best practices.
  • Data Validation: Validate and correct datasets to ensure high‑quality data for model training and evaluation.
  • Collaboration: Work closely with data scientists and engineers to help define new annotation guidelines, resolve ambiguities and contribute to the overall project strategy.
Requirements
  • Programming: Proficient in Python (required) and at least one additional language such as JavaScript, TypeScript, C / C++ or Rust. Bonus for experience with Go, Swift, Ruby, PHP or Kotlin.
  • Web Technologies: Experience with web scraping, APIs, HTML / CSS / JavaScript and both frontend (React) and backend (Flask) development.
  • Machine Learning & AI: Knowledge of ML model development, deep learning frameworks (TensorFlow, PyTorch), NLP, reinforcement learning, computer vision and game AI.
  • Data Science & Engineering: Skilled in data analysis, visualization (Pandas, Matplotlib, NumPy) and database management (SQL, NoSQL).
  • Algorithms & Math: Understanding of general and specialized algorithms, optimization and problem‑solving techniques.
  • Software Engineering Practices: Familiarity with Git / GitHub, clean coding principles, software design patterns and debugging.
Preferred Qualifications
  • Experience with AI / ML concepts, particularly with large language models (LLMs) and code generation.
  • Familiarity with various programming paradigms (e.g., object‑oriented, functional).
  • Experience with code review in a professional or academic setting.
  • Experience in data annotation or similar quality assurance roles.

Employment Type : Full-Time

Experience : years

Vacancy : 1
