Job Search and Career Advice Platform

Ativa os alertas de emprego por e-mail!

AI Engineer

PepsiCo

Vila Velha

Presencial

BRL 120.000 - 160.000

Tempo integral

Há 2 dias
Torna-te num dos primeiros candidatos

Cria um currículo personalizado em poucos minutos

Consegue uma entrevista e ganha mais. Sabe mais

Resumo da oferta

A global food and beverage company is seeking a Senior Data Scientist/AI Engineer in Brazil to lead AI safety evaluations for chatbot systems. The role involves adversarial testing, risk assessments, and developing safety strategies. Candidates should have a master's degree and extensive experience in machine learning, particularly in LLM applications. Strong communication skills and familiarity with deep learning frameworks like PyTorch or TensorFlow are essential. This position promises high impact and influences across the organization.

Qualificações

  • 4+ years of experience developing or evaluating machine learning systems, including LLM or NLP applications.
  • Experience conducting model evaluations, experimentation, or reliability testing.
  • Experience with cloud platforms (AWS, Azure, GCP) and MLOps workflows.

Responsabilidades

  • Lead adversarial testing, including prompt injection and harmful content generation.
  • Conduct risk assessments for AI-driven chatbots and autonomous systems.
  • Develop reproducible experiments for LLM behavior analysis.

Conhecimentos

Machine Learning systems development
Generative AI knowledge
Python proficiency
Deep learning frameworks experience
Strong communication skills

Formação académica

Master’s degree in Computer Science, Data Science, or related field

Ferramentas

PyTorch
TensorFlow
Descrição da oferta de emprego

We are seeking a Senior Data Scientist/AI Engineer specializing in AI Safety (to be located either in Vitoria -Basque Country- or Barcelona) to lead adversarial testing, risk assessment, and safety evaluations for LLM- and agent-powered chatbot systems. This role focuses on ensuring that AI technologies are safe, reliable, and aligned with business and user needs across high impact use cases.

You will join a collaborative interdisciplinary team to design, evaluate, and harden AI/ML systems against misuse, failures, and emerging risks. You will work closely with product owners, engineering teams, and business stakeholders to identify safety requirements, conduct adversarial assessments, and develop robust mitigation strategies. This role is highly technical and safety critical, with broad visibility and influence across the organization.

Responsibilities
  • AI Safety, Robustness & Risk Assessment – Lead adversarial testing, including jailbreak attempts, prompt injection, harmful content generation, system prompt extraction, and agent tool misuse.
  • AI Safety, Robustness & Risk Assessment – Conduct end to end risk assessments for AI driven chatbots and autonomous agent systems, identifying hazards, evaluating exposure, and defining mitigation strategies.
  • AI Safety, Robustness & Risk Assessment – Build and maintain AI safety evaluation pipelines, including red team test suites, scenario-based evaluations, and automated stress testing.
  • AI Safety, Robustness & Risk Assessment – Define and monitor safety KPIs such as harmful output rates, robustness scores, and model resilience metrics.
  • AI Safety, Robustness & Risk Assessment – Analyze failure modes (e.g., hallucinations, deceptive reasoning, unsafe tool execution) and design guardrails to minimize risks.
Technical Development & Collaboration
  • Develop reproducible experiments for LLM behavior analysis, including prompt engineering, control mechanisms, and guardrail testing.
  • Partner with data engineers and MLOps teams to integrate safety evaluations into CI/CD pipelines.
  • Work with product teams to translate safety requirements into actionable technical specifications.
  • Support model governance, including documentation, safety reports, and compliance with internal and external standards.
  • Contribute to innovation and research around emerging safety methodologies for LLMs and agent architectures.
Knowledge Sharing & Leadership
  • Serve as an internal expert on AI safety best practices, adversarial testing methodologies, and robust system design.
  • Provide guidance and mentorship to data scientists, engineers, and product partners on safe AI development.
  • Create high-quality documentation, playbooks, and reusable tools for safety evaluations.
Qualifications
  • Master’s degree in Computer Science, Data Science, Machine Learning, or related quantitative field.
  • 4+ years of experience developing or evaluating machine learning systems, including LLM- or NLP-based applications.
  • Strong knowledge of Generative AI and Transformer-based models.
  • Experience with at least one deep learning framework (PyTorch, TensorFlow).
  • Proficiency with Python and common data/ML libraries.
  • Experience conducting model evaluations, experimentation, or reliability testing.
  • Clear communication skills and the ability to translate technical findings into business relevant insights.
Preferred Qualifications
  • Experience with adversarial ML, red teaming, or AI safety research.
  • Familiarity with safety testing frameworks such as automated red-teamers, harmful content classifiers, or jailbreak detection systems.
  • Hands-on experience with LLM agents, tool-use orchestration, or autonomous systems.
  • Knowledge of risk management frameworks (e.g., NIST AI RMF, ISO 42001) and Responsible AI principles.
  • Experience designing safety guardrails, moderation layers, or policy enforcement mechanisms.
  • Background in reinforcement learning or agent evaluation.
  • Experience with cloud platforms (AWS, Azure, GCP) and MLOps workflows.
Obtém a tua avaliação gratuita e confidencial do currículo.
ou arrasta um ficheiro em formato PDF, DOC, DOCX, ODT ou PAGES até 5 MB.