A technology company is seeking LLM testers for a red teaming project focused on evaluating large language models. The role involves crafting prompts, documenting outcomes, and working independently. Ideal candidates are doctoral students or recent graduates with hands-on LLM experience and a strong creative mindset. This is a part-time, flexible remote position.
You’ll be part of a red teaming project focused on probing large language models for failure modes and harmful outputs. Your work will involve crafting prompts and scenarios to test model guardrails, exploring creative ways to bypass restrictions, and systematically documenting outcomes. You’ll think like an adversary to uncover weaknesses while collaborating with engineers and safety researchers to share findings and improve system defenses.
This program is open to U.S.-based doctoral students, doctoral candidates, and recent graduates with valid work or training authorization (e.g., F-1/OPT, J-1, H-1B). Participants are responsible for ensuring compliance with their visa conditions and confirming eligibility with their program or visa sponsor prior to applying.