Attiva gli avvisi di lavoro via e-mail!

Research Engineer Position on Secure Agentic AI Systems

AI4I Foundation

Torino

In loco

EUR 40.000 - 70.000

Tempo pieno

Oggi
Candidati tra i primi

Genera un CV personalizzato in pochi minuti

Ottieni un colloquio e una retribuzione più elevata. Scopri di più

Descrizione del lavoro

A leading AI research institute in Torino seeks a Research Engineer to design secure AI systems. You will implement cutting-edge security solutions, conduct proactive red-team exercises, and develop frameworks for deployment validation. Applicants should hold an advanced degree in relevant fields and have hands-on experience with ML frameworks. Enjoy opportunities for publication and impact in a vibrant tech hub.

Servizi

Competitive compensation packages
Full support for conference travel
Access to high-performance computing infrastructure

Competenze

  • Master's or PhD focused on security, systems, or machine learning.
  • Hands-on experience with contemporary ML frameworks for securing AI workloads.

Mansioni

  • Design scalable security platforms for AI agents.
  • Conduct red-team exercises to identify vulnerabilities.
  • Develop defenses against LLM threats.
  • Build security validation frameworks.
  • Collaborate with research teams for implementations.

Conoscenze

Hands-on experience with ML frameworks
Understanding of security solutions
Proactive red-team exercises
Developing security validation frameworks
Collaboration with research teams

Formazione

Master’s or PhD in Computer Science or related

Strumenti

PyTorch
Hugging Face
Docker
Kubernetes
GCP/AWS/Azure
Descrizione del lavoro

Deadline:

December 14, 2025, 11:59 PM CEST

The AI Security Lab is looking for a creative and highly motivated Research Engineer to join our founding team and help build the next generation of secure agentic AI systems through practical implementation of cutting‑edge security solutions.

The Role

As a research engineer, you will be instrumental in designing and implementing our end‑to‑end security platform that enables secure AI deployment at scale. This position offers the unique opportunity to architect secure AI solutions from first principles, translating theoretical security concepts into production‑ready systems. Your work will focus on creating foundational infrastructure for AI red‑teaming, secure agent execution environments, verification protocols, and continuous monitoring frameworks that protect AI systems during runtime and safeguard data throughout processing, storage, and transfer. Working alongside security researchers and engineers, you’ll bridge the gap between frontier research and practical deployment, ensuring that advanced AI agents can operate securely in real‑world environments.

Key Responsibilities
  • Design and implement scalable security platforms for AI agents and large language model workloads, including secure execution environments and runtime protection mechanisms.
  • Conduct proactive red‑team exercises simulating external adversaries and insider threats to identify and remediate vulnerabilities in agentic AI solutions.
  • Develop and deploy defenses against LLM‑specific threats including prompt injection, task hijacking, model extraction, and data leakage attacks.
  • Build security validation frameworks and compliance certification tools to support secure system deployments for internal teams and pilot partners.
  • Collaborate with research teams to translate novel security findings into production‑ready implementations and open‑source security tools.
Minimum Qualifications
  • Master’s or PhD in Computer Science, Engineering, or a related field with focus on security, systems, or machine learning.
  • Hands‑on experience with modern ML frameworks including PyTorch, Hugging Face, JAX, or TensorFlow for deploying and securing AI workloads.
Preferred Qualifications
  • Experience conducting penetration testing, vulnerability assessments, security architecture reviews, or threat modelling for complex systems.
  • Expertise with trusted execution environments (TEEs), containerisation technologies (Docker/Kubernetes), CI/CD pipelines, and cloud platforms (GCP/AWS/Azure).
  • Background in optimizing AI model serving infrastructure, scaling inference workloads, or deploying models in production with security considerations.
  • Deep knowledge of AI‑specific security threats including prompt injection attacks, LLM red‑teaming methodologies, jailbreaking techniques, and privacy‑preserving ML methods.
  • Experience with GPU cluster management and orchestration for secure AI workload deployment.
What We Offer
  • A pioneering research team, you will work alongside a highly talented and collaborative team of security researchers and engineers who share your passion for advancing AI safety and security. We foster an environment of innovation and mutual support, with clear pathways for career advancement and technical leadership.
  • Research impact and visibility, we are committed to advancing both practical security solutions and fundamental research. You will have opportunities to publish at top‑tier venues, while also contributing to national and European industrial research initiatives that shape the future of secure AI.
  • Prime location at OGR Torino, our offices are situated at OGR Torino, the city’s leading technology and innovation hub. You’ll be immersed in Italy’s vibrant tech ecosystem with access to countless events, meetups, and a dynamic community of innovators and entrepreneurs.
  • Comprehensive support and resources, we provide competitive compensation packages and full support for conference travel and professional development. You’ll have access to state‑of‑the‑art high‑performance computing infrastructure and GPU clusters essential for conducting cutting‑edge AI security research.

If you’re passionate about shaping the future of AI security and want to see your research protect the next generation of AI systems, we’d love to hear from you. Let’s build secure AI together!

Start Date

Flexible, as soon as possible.

Application Requirements
  • Cover letter (max. 1 page) describing how your background aligns with this specific position and outlining your research interests and professional goals in AI security.
  • CV including your publication record and links to open‑source contributions, code repositories (e.g., GitHub), or research prototypes.
ABOUT US

AI4I – THE ITALIAN RESEARCH INSTITUTE FOR ARTIFICIAL INTELLIGENCE FOR INDUSTRIAL IMPACT

AI4I has been founded to perform transformative, application‑oriented research in Artificial Intelligence.

AI4I is set to engage and empower gifted, entrepreneurial, young researchers who commit to producing an impact at the intersection of science, innovation, and industrial transformation.

Highly competitive pay, bonus incentives, access to dedicated high‑performance computing, state‑of‑the‑art laboratories, industrial collaborations, and an ecosystem tailored to support the initiation and growth of startups stand out as some of the distinctive features of AI4I, bringing together people in a dynamic international environment.

AI4I is an Institute that aims to enhance scientific research, technological transfer, and, more generally, the innovation capacity of the Country, promoting its positive impact on industry, services, and public administration. To this end, the Institute contributes to creating a research and innovation infrastructure that employs artificial intelligence methods, with particular reference to manufacturing processes, within the framework of the Industry 4.0 process and its entire value chain. The Institute establishes relationships with similar entities and organisations in Italy and abroad, including Competence Centres and European Digital Innovation Hubs (EDIHs), so that the centre may become an attractive place for researchers, companies, and start‑ups.

AI4I
The Italian Institute of Artificial Intelligence for Industry
Corso Castelfidardo 22, 10129 Torino
Codice fiscale 97904430010

Ottieni la revisione del curriculum gratis e riservata.
oppure trascina qui un file PDF, DOC, DOCX, ODT o PAGES di non oltre 5 MB.