Research Engineer Position on Secure Agentic AI Systems

The Italian Institute of Artificial Intelligence for Industry (AI4I)

Piemonte

On-site

EUR 30,000 – 50,000

Full-time

Job Description

A leading institute for AI research in Torino is seeking a Research Engineer to develop secure AI solutions. The role involves designing security platforms, conducting red-team exercises, and deploying defenses against AI-specific threats. Ideal candidates hold a Master’s or PhD in a related field and possess strong programming skills in Python and additional languages. This position offers a chance to work in an innovative environment with opportunities for career advancement.

Benefits

Competitive compensation packages
Support for conference travel
Access to high-performance computing infrastructure

Skills

  • Hands-on experience with modern ML frameworks for deploying and securing AI workloads.
  • Strong programming skills in Python and proficiency in at least one additional language.
  • Background in optimizing AI model serving infrastructure and deploying models securely.

Responsibilities

  • Design and implement scalable security platforms for AI agents.
  • Conduct red-team exercises to identify vulnerabilities in AI solutions.
  • Develop defenses against LLM-specific threats.

Knowledge

Python
C++
Docker
Kubernetes
Rust
JavaScript

Education

Master’s or PhD in Computer Science, Engineering, or related field

Tools

PyTorch
Hugging Face
JAX
TensorFlow

Full Job Description

The AI Security Lab is looking for a creative and highly motivated Research Engineer to join our founding team and help build the next generation of secure agentic AI systems through practical implementation of cutting‑edge security solutions.

The Role

As a research engineer, you will be instrumental in designing and implementing our end‑to‑end security platform that enables secure AI deployment at scale. This position offers the unique opportunity to architect secure AI solutions from first principles, translating theoretical security concepts into production‑ready systems. Your work will focus on creating foundational infrastructure for AI red‑teaming, secure agent execution environments, verification protocols, and continuous monitoring frameworks that protect AI systems during runtime and safeguard data throughout processing, storage, and transfer. Working alongside security researchers and engineers, you’ll bridge the gap between frontier research and practical deployment, ensuring that advanced AI agents can operate securely in real‑world environments.

Key Responsibilities
  • Design and implement scalable security platforms for AI agents and large language model workloads, including secure execution environments and runtime protection mechanisms.
  • Conduct proactive red‑team exercises simulating external adversaries and insider threats to identify and remediate vulnerabilities in agentic AI solutions.
  • Develop and deploy defenses against LLM‑specific threats including prompt injection, task hijacking, model extraction, and data leakage attacks.
  • Build security validation frameworks and compliance certification tools to support secure system deployments for internal teams and pilot partners.
  • Collaborate with research teams to translate novel security findings into production‑ready implementations and open‑source security tools.
Minimum Qualifications
  • Master’s or PhD in Computer Science, Engineering, or a related field with focus on security, systems, or machine learning.
  • Hands‑on experience with modern ML frameworks including PyTorch, Hugging Face, JAX, or TensorFlow for deploying and securing AI workloads.
  • Strong programming skills in Python and proficiency in at least one additional language such as C++, Rust, or JavaScript/TypeScript.
Preferred Qualifications
  • Experience conducting penetration testing, vulnerability assessments, security architecture reviews, or threat modeling for complex systems.
  • Expertise with trusted execution environments (TEEs), containerization technologies (Docker/Kubernetes), CI/CD pipelines, and cloud platforms (GCP/AWS/Azure).
  • Background in optimizing AI model serving infrastructure, scaling inference workloads, or deploying models in production with security considerations.
  • Deep knowledge of AI‑specific security threats including prompt injection attacks, LLM red‑teaming methodologies, jailbreaking techniques, and privacy‑preserving ML methods.
  • Experience with GPU cluster management and orchestration for secure AI workload deployment.
What We Offer
  • A pioneering research team: You will work alongside a highly talented and collaborative team of security researchers and engineers who share your passion for advancing AI safety and security. We foster an environment of innovation and mutual support, with clear pathways for career advancement and technical leadership.
  • Research impact and visibility: We are committed to advancing both practical security solutions and fundamental research. You will have opportunities to publish at top‑tier venues, while also contributing to national and European industrial research initiatives that shape the future of secure AI.
  • Prime location at OGR Torino: Our offices are situated at OGR Torino, the city’s leading technology and innovation hub. You’ll be immersed in Italy’s vibrant tech ecosystem with access to countless events, meetups, and a dynamic community of innovators and entrepreneurs.
  • Comprehensive support and resources: We provide competitive compensation packages and full support for conference travel and professional development. You’ll have access to state‑of‑the‑art high‑performance computing infrastructure and GPU clusters essential for conducting cutting‑edge AI security research.
  • Salary range: €30,000 – €50,000 gross per year, plus bonus, depending on experience.

If you’re passionate about shaping the future of AI security and want to see your research protect the next generation of AI systems, we’d love to hear from you. Let’s build secure AI together!

Start Date: Flexible, as soon as possible.

Application Requirements
  • Cover letter (max. 1 page) describing how your background aligns with this specific position and outlining your research interests and professional goals in AI security.
  • CV including your publication record and links to open‑source contributions, code repositories (e.g., GitHub), or research prototypes.