
Content Writer

Innodata Inc.

Belém

On-site

BRL 20,000 - 80,000

Full-time

Today

Job summary

A leading technology firm based in Brazil is seeking an analytical professional specializing in AI red teaming and quality assurance. You will evaluate AI-generated content, identify vulnerabilities, and ensure compliance with safety standards. The ideal candidate has proven experience in AI safety testing and excellent analytical writing skills. Join us in shaping the future of AI responsibly.

Qualifications

  • Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
  • Strong background in Quality Assurance or test case development for AI/ML systems.
  • Excellent critical thinking, pattern recognition, and analytical writing skills.

Responsibilities

  • Conduct Red Teaming exercises for LLM output safety.
  • Evaluate AI prompts to identify potential risks and failures.
  • Collaborate with data scientists and safety researchers.

Skills

AI red teaming
Quality Assurance
Prompt engineering
Analytical writing
Critical thinking
Job description

We are seeking highly analytical and detail-oriented professionals with hands-on experience in Red Teaming, Prompt Evaluation, and AI/LLM Quality Assurance.

The ideal candidate will help us rigorously test and evaluate AI-generated content to identify vulnerabilities, assess risks, and ensure compliance with safety, ethical, and quality standards.

Key Responsibilities
  • Conduct Red Teaming exercises to identify adversarial, harmful, or unsafe outputs from large language models (LLMs).
  • Evaluate and stress-test AI prompts across multiple domains (e.g., finance, healthcare, security) to uncover potential failure modes.
  • Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
  • Collaborate with data scientists, safety researchers, and prompt engineers to report risks and suggest mitigations.
  • Perform manual QA and content validation across model versions, ensuring factual consistency, coherence, and guideline adherence.
  • Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
  • Document findings, edge cases, and vulnerability reports with high clarity and structure.
Requirements
  • Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
  • Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
  • Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
  • Understanding of LLM behaviors, failure modes, and model evaluation metrics.
  • Excellent critical thinking, pattern recognition, and analytical writing skills.
  • Ability to work independently, follow detailed evaluation protocols, and meet tight deadlines.
Preferred Qualifications
  • Prior work with teams like OpenAI, Anthropic, Google DeepMind, or other LLM safety initiatives.
  • Experience in risk assessment, red team security testing, or AI policy & governance.
  • Background in linguistics, psychology, or computational ethics is a plus.

We are an equal opportunities employer and welcome applications from all qualified candidates.
