Red Teaming Prompt Writer – Cultural Awareness

Innodata Inc.

Berlin

On-site

EUR 40.000 - 60.000

Part-time

Yesterday

Summary

A leading data engineering company in Berlin is looking for an AI Red Teaming & Prompt Evaluation Specialist. You will conduct evaluations of AI-generated content, identifying vulnerabilities and ensuring compliance with quality standards. Candidates should have proven experience in AI red teaming and strong analytical skills. This part-time role requires independent work and adherence to detailed protocols, with an hourly commitment of 5-6 hours.

Qualifications

  • Proven experience in AI red teaming or adversarial prompt design.
  • Familiarity with prompt engineering and ethical considerations in generative AI.
  • Strong background in content review for AI/ML systems.

Responsibilities

  • Conduct Red Teaming exercises for LLM outputs.
  • Evaluate AI prompts to uncover failure modes.
  • Document findings and vulnerability reports.

Skills

AI red teaming
NLP tasks
Quality Assurance
Pattern recognition
Analytical writing

Job Description

Overview

Innodata (NASDAQ: INOD) is a leading data engineering company. With more than 2,000 customers and operations in 13 cities around the world, we are the AI technology solutions provider of choice for four of the world’s five biggest technology companies, as well as for leading companies across financial services, insurance, technology, law, and medicine.

By combining advanced machine learning and artificial intelligence (ML / AI) technologies, a global workforce of subject matter experts, and a high-security infrastructure, we’re helping usher in the promise of AI. Innodata offers a powerful combination of both digital data solutions and easy-to-use, high-quality platforms.

Our global workforce includes over 5,000 employees in the United States, Canada, the United Kingdom, the Philippines, India, Sri Lanka, Israel, and Germany.

We are seeking highly analytical and detail-oriented professionals with hands-on experience in Red Teaming, Prompt Evaluation, and AI / LLM Quality Assurance. The ideal candidate will help us rigorously test and evaluate AI-generated content to identify vulnerabilities, assess risks, and ensure compliance with safety, ethical, and quality standards.

Job Title: AI Red Teaming & Prompt Evaluation Specialist

Hourly Commitment: 5-6 hours

Responsibilities
  • Conduct Red Teaming exercises to identify adversarial, harmful, or unsafe outputs from large language models (LLMs).
  • Evaluate and stress-test AI prompts across multiple domains (e.g., finance, healthcare, security) to uncover potential failure modes.
  • Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
  • Collaborate with data scientists, safety researchers, and prompt engineers to report risks and suggest mitigations.
  • Perform manual QA and content validation across model versions, ensuring factual consistency, coherence, and guideline adherence.
  • Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
  • Document findings, edge cases, and vulnerability reports with high clarity and structure.

Qualifications
  • Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
  • Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
  • Strong background in Quality Assurance, content review, or test case development for AI / ML systems.
  • Understanding of LLM behaviours, failure modes, and model evaluation metrics.
  • Excellent critical thinking, pattern recognition, and analytical writing skills.
  • Ability to work independently, follow detailed evaluation protocols, and meet tight deadlines.