A leading company is seeking a contractor for a unique role in evaluating AI-generated code. You'll collaborate with industry experts, reviewing and ranking model-generated code, ensuring quality and maintainability. Ideal candidates will have 7+ years of software engineering experience and excellent assessment skills. This is a part-time position with the opportunity for extension based on performance.
Job Description
About Us
Turing is one of the world’s fastest-growing AI companies, pushing the boundaries of AI-assisted software development. Our mission is to empower the next generation of AI systems to reason about and work with real-world software repositories. You’ll be working at the intersection of software engineering, open-source ecosystems, and frontier AI.
Project Overview
We're building high-quality evaluation and training datasets to improve how Large Language Models (LLMs) interact with realistic software engineering tasks. A key focus of this project is curating verifiable software engineering challenges from public GitHub repository histories using a human-in-the-loop process.
Why This Role Is Unique
Required Skills
Bonus Points
Engagement Details