Remote Senior Software Engineer (LLM) - 34953

ZipRecruiter

Altrincham

Hybrid

GBP 80,000 - 100,000

Part time

11 days ago

Job summary

A leading company in AI development is seeking a software engineering contractor to evaluate AI-generated code. This role involves reviewing model responses and ensuring high-quality coding standards for real-world applications. Ideal candidates will have extensive experience in software engineering and strong communication skills.

Qualifications

  • 7+ years of professional software engineering experience.
  • Strong fundamentals in software design and coding best practices.
  • Experience with LLM-generated code evaluation is a plus.

Responsibilities

  • Review and rank model-generated code responses.
  • Evaluate code diffs for correctness and quality.
  • Collaborate with the team on edge cases and ambiguities.

Skills

Software design
Coding best practices
Debugging
Code quality assessment
Proficient with code review
Written communication

Job description

About Us

Turing is one of the world’s fastest-growing AI companies, pushing the boundaries of AI-assisted software development. Our mission is to empower the next generation of AI systems to reason about and work with real-world software repositories. You’ll be working at the intersection of software engineering, open-source ecosystems, and frontier AI.

Project Overview

We're building high-quality evaluation and training datasets to improve how Large Language Models (LLMs) interact with realistic software engineering tasks. A key focus of this project is curating verifiable software engineering challenges from public GitHub repository histories using a human-in-the-loop process.

Why This Role Is Unique

  • Collaborate directly with AI researchers shaping the future of AI-powered software development.
  • Work with high-impact open-source projects and evaluate how LLMs perform on real bugs, issues, and developer tasks.
  • Influence dataset design that will train and benchmark next-gen LLMs.

What Does Day-to-Day Look Like

  • Review and compare 3–4 model-generated code responses for each task using a structured ranking system.
  • Evaluate code diffs for correctness, code quality, style, and efficiency.
  • Provide clear, detailed rationales explaining the reasoning behind each ranking decision.
  • Maintain high consistency and objectivity across evaluations.
  • Collaborate with the team to identify edge cases and ambiguities in model behavior.
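As a rough illustration of the structured ranking described above, the sketch below models a single evaluation record: ranks across several model responses plus the required written rationale. All names here (`ResponseRanking`, the model IDs, the validation rules) are hypothetical; the actual tooling and schema used on the project are not specified in this posting.

```python
from dataclasses import dataclass

# Hypothetical sketch of a structured ranking record; not the project's
# actual schema, which this posting does not describe.
@dataclass
class ResponseRanking:
    task_id: str
    # Maps each model response ID to its rank (1 = best).
    ranks: dict
    # Free-text rationale explaining the ranking decision.
    rationale: str

    def validate(self) -> bool:
        """Ranks must be a strict ordering 1..N, and a rationale is required."""
        n = len(self.ranks)
        if sorted(self.ranks.values()) != list(range(1, n + 1)):
            raise ValueError("ranks must be a strict ordering 1..N")
        if not self.rationale.strip():
            raise ValueError("a written rationale is required")
        return True

# Example: ranking three model-generated responses for one task.
ranking = ResponseRanking(
    task_id="task-001",
    ranks={"model_a": 2, "model_b": 1, "model_c": 3},
    rationale="model_b's diff fixes the bug minimally; model_a only "
              "changes style; model_c's change breaks an existing test.",
)
ranking.validate()
```

Enforcing a strict 1..N ordering and a non-empty rationale mirrors the consistency and written-justification requirements listed in the responsibilities.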

Required Skills

  • 7+ years of professional software engineering experience, ideally at top-tier product companies (e.g., Stripe, Datadog, Snowflake, Dropbox, Canva, Shopify, Intuit, PayPal, research at IBM/GE/Honeywell/Schneider, etc.).
  • Strong fundamentals in software design, coding best practices, and debugging.
  • Excellent ability to assess code quality, correctness, and maintainability.
  • Proficient with code review processes and reading diffs in real-world repositories.
  • Exceptional written communication skills to articulate evaluation rationale clearly.
  • Prior experience with LLM-generated code or evaluation work is a plus.

Bonus Points

  • Experience in LLM research, developer agents, or AI evaluation projects.
  • Background in building or scaling developer tools or automation systems.

Engagement Details

  • Commitment: ~20 hours/week (partial PST overlap required)
  • Type: Contractor (no medical/paid leave)
  • Duration: 1 month (starting next week; potential extensions based on performance and fit)
  • Rates: $40–$100/hour, based on experience and skill level.