DevSecMLOps: Security-by-Design for Trustworthy Machine Learning Pipelines

IRIT, Université de Toulouse

France

On-site

EUR 25,000 - 35,000

Full-time

Posted 30+ days ago

Job summary

A leading research institution in France seeks a PhD candidate to investigate security in MLOps. The role involves analyzing vulnerabilities in machine learning lifecycles, designing tailored security mechanisms, and conducting case studies with industrial partners. Candidates should hold a Master's degree and have skills in MLOps, software engineering, and cybersecurity.

Qualifications

  • Master’s students or professionals with a Master’s degree.
  • Strong understanding of MLOps practices.
  • Experience with software security principles.

Responsibilities

  • Conduct a comprehensive study of vulnerabilities across ML lifecycles.
  • Design security-by-design mechanisms tailored to ML workflows.
  • Validate proposed solutions through industrial case studies.

Skills

MLOps
Software engineering
Cybersecurity

Education

Master's degree

Job description

  • Organisation/Company: IRIT, Université de Toulouse
  • Research Field: Computer science » Informatics
  • Researcher Profile: First Stage Researcher (R1), Recognised Researcher (R2), Established Researcher (R3), Leading Researcher (R4)
  • Country: France
  • Application Deadline: 29 Sep 2026 - 22:00 (UTC)
  • Type of Contract: Temporary
  • Job Status: Full-time
  • Offer Starting Date: 1 Oct 2026
  • Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
  • Is the job related to a staff position within a Research Infrastructure? No

Offer Description

Context

Machine Learning Operations (MLOps) has become essential to managing the lifecycle of machine learning (ML) models, enabling continuous delivery, automation, and reproducibility. However, the rapid adoption of MLOps has outpaced the integration of robust security practices. Traditional software security practices, such as static analysis, dynamic scans, and vulnerability assessments, are well established, but ML pipelines present additional, unique security concerns [1] [2]. For instance, ML systems face risks like adversarial attacks, model poisoning, training data compromise, drift, and injection attacks [3]. Additionally, privacy and compliance challenges, such as protecting personally identifiable information (PII) during data ingestion and model training, introduce further complexity that traditional security methods often overlook [4].

ML models therefore require security controls tailored to their lifecycle, from data collection to training, deployment, and monitoring. Current MLOps practices lack comprehensive built-in security mechanisms for these ML-specific risks, and existing countermeasures are fragmented: they either target specific threats, lack end-to-end traceability across the pipeline, or introduce prohibitive overhead that undermines the agility promised by MLOps. This has given rise to the emerging field of DevSecMLOps, which aims to extend the principles of DevSecOps [5, 6] to machine learning systems, ensuring both agility and security in AI-based applications.
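As a concrete illustration of a lifecycle-specific control, the sketch below verifies a training dataset against a digest recorded at ingestion time before any training run, so that silent tampering (one vector for data poisoning) fails the pipeline early. It is a minimal sketch, assuming a file-based dataset and a manifest maintained by the pipeline; the function and manifest names are illustrative, not part of any existing framework.

    import hashlib
    from pathlib import Path

    def fingerprint_dataset(path: Path) -> str:
        """Compute a SHA-256 digest of a dataset file for provenance tracking."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_against_manifest(path: Path, manifest: dict) -> None:
        """Abort the stage if the dataset no longer matches the digest
        recorded at ingestion time (possible tampering or poisoning)."""
        expected = manifest[path.name]
        actual = fingerprint_dataset(path)
        if actual != expected:
            raise RuntimeError(
                f"{path.name}: digest mismatch; refusing to train on modified data"
            )

Recording digests at ingestion and re-checking them before training is one inexpensive way to obtain the end-to-end data traceability that the fragmented approaches mentioned above lack.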

The core problem is therefore the absence of a unified, systematic, and pipeline-wide approach to integrating security-by-design into MLOps pipelines. We lack frameworks that can:

  • Embed security requirements explicitly into ML workflows from the start,
  • Continuously enforce and monitor these requirements across all pipeline stages, and
  • Adapt to evolving threats without slowing down the pace of deployment.

Without such an approach, organizations risk deploying AI systems that are performant but fragile, exposing them to critical security and privacy breaches.
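One way to picture what "embedding security requirements from the start" could mean in practice: each requirement becomes an explicit, named gate bound to a pipeline stage, and a stage cannot complete while any of its gates fails. The sketch below is a hypothetical design, not an existing tool; the stage names, gate names, and thresholds are all illustrative assumptions.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class SecurityGate:
        """A named security requirement attached to one pipeline stage."""
        stage: str                     # e.g. "ingestion", "training", "deployment"
        name: str
        check: Callable[[dict], bool]  # receives the pipeline context

    GATES = [
        # Fail closed: a missing measurement counts as a failure.
        SecurityGate("ingestion", "pii-scan-clean",
                     lambda ctx: ctx.get("pii_findings", 1) == 0),
        SecurityGate("training", "data-provenance-verified",
                     lambda ctx: ctx.get("provenance_ok", False)),
        SecurityGate("deployment", "adversarial-suite-passed",
                     lambda ctx: ctx.get("adv_accuracy", 0.0) >= 0.7),
    ]

    def enforce(stage: str, ctx: dict) -> None:
        """Run every gate registered for `stage`; abort on the first failure
        so insecure artifacts never reach the next pipeline step."""
        for gate in (g for g in GATES if g.stage == stage):
            if not gate.check(ctx):
                raise SystemExit(f"[{stage}] security gate failed: {gate.name}")

    # Usage: the training stage proceeds only if its gates pass.
    enforce("training", {"provenance_ok": True})

Because such gates would live next to the pipeline definition rather than in a separate audit process, they could be versioned, reviewed, and enforced at the same cadence as the code they protect.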

Objectives

The PhD will investigate the foundations and practical mechanisms of DevSecMLOps. The security specifics addressed in MLOps will mainly concern privacy: users of ML-based solutions are legitimately concerned about what happens to their data (e.g., where it is stored and who has access to it), and data anonymization is a key concern. The other facets of security (e.g., who is responsible in the event of a security incident, and how to ensure that ML models are robust against attacks and cannot be used maliciously) will also have to be taken into account. The research will focus on embedding security requirements directly into ML workflows, ensuring that threats such as data poisoning, adversarial manipulation, and privacy leakage are anticipated and mitigated early. It will also explore AI-driven automation to support continuous security checks, balancing the rigour of security with the agility of continuous delivery. The expected result is a methodological and technical framework that operationalizes security for ML pipelines, enabling organizations to deploy AI systems that are both performant and trustworthy.
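On the privacy facet, one common ingestion-time control is pseudonymization of direct identifiers before data ever reaches training. The snippet below is a minimal sketch: a keyed hash (HMAC) replaces raw identifiers with stable tokens, which plain unkeyed hashing would not do safely, since an unkeyed hash of an email address is open to dictionary attacks. The key handling and field names are assumptions for illustration; real anonymization for compliance purposes involves far more than this.

    import hmac
    import hashlib

    # Assumption: the key is provisioned from a secret store, never committed.
    SECRET_KEY = b"rotate-me-and-fetch-from-a-vault"

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a keyed, irreversible token."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"),
                        hashlib.sha256).hexdigest()[:16]

    def scrub_record(record: dict, pii_fields=("email", "full_name")) -> dict:
        """Pseudonymize PII columns before the record enters the training set."""
        return {k: pseudonymize(v) if k in pii_fields else v
                for k, v in record.items()}

    # The training pipeline only ever sees tokens, never raw identifiers.
    print(scrub_record({"email": "a.user@example.org", "age": 41}))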

Mission

The PhD candidate will conduct a comprehensive study of vulnerabilities across ML lifecycles, identify the security issues associated with current MLOps practices, and analyze how existing DevSecOps principles can be extended to MLOps. The candidate will design security-by-design mechanisms tailored to ML workflows, from data ingestion and preprocessing to model training and deployment, acknowledging that these systems evolve rapidly. The candidate will also explore the use of machine learning for automating security checks, generating adversarial tests, and detecting pipeline anomalies. Finally, the proposed solutions will be validated through industrial case studies (with Softeam Group), demonstrating their effectiveness in mitigating threats while maintaining reproducibility and delivery speed.
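To give a flavour of what an automated security check could look like, the sketch below runs a cheap robustness smoke test on every pipeline execution: it perturbs inputs with small bounded random noise and measures how often predictions flip. Random noise is only a weak stand-in for true adversarial test generation (e.g., gradient-based attacks such as FGSM or PGD), and the toy model, epsilon, and trial count are illustrative assumptions.

    import numpy as np

    def robustness_smoke_test(predict, X, epsilon=0.05, trials=20, seed=0):
        """Fraction of noise rounds in which any prediction changed.
        A crude stability probe, not a substitute for real adversarial
        attacks, but cheap enough to run on every pipeline execution."""
        rng = np.random.default_rng(seed)
        baseline = predict(X)
        flips = 0
        for _ in range(trials):
            noise = rng.uniform(-epsilon, epsilon, size=X.shape)
            flips += int(np.any(predict(np.clip(X + noise, 0.0, 1.0)) != baseline))
        return flips / trials

    # Toy stand-in model: thresholds the mean of each row of features in [0, 1].
    toy_predict = lambda X: (X.mean(axis=1) > 0.5).astype(int)
    X = np.random.default_rng(1).uniform(size=(8, 4))
    print(f"flip rate under epsilon=0.05 noise: "
          f"{robustness_smoke_test(toy_predict, X):.2f}")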

References

[1] X. Zhang, ‘Conceptualizing, Applying and Evaluating SecMLOps: A Paradigm for Embedding Security into the ML Lifecycle’, Carleton University, 2025. Accessed: Sept. 08, 2025. [Online]. Available: https://hdl.handle.net/20.500.14718/43535

[2] B. Eken, S. Pallewatta, N. Tran, A. Tosun, and M. A. Babar, ‘A Multivocal Review of MLOps Practices, Challenges and Open Issues’, ACM Comput. Surv., July 2025, doi: 10.1145/3747346.

[5] E. P. Enoiu, D. Truscan, A. Sadovykh, and W. Mallouli, ‘VeriDevOps Software Methodology: Security Verification and Validation for DevOps Practices’, in Proceedings of the 18th International Conference on Availability, Reliability and Security (ARES), Aug. 2023, pp. 1-9.

[6] I. Nigmatullin, A. Sadovykh, N. Messe, S. Ebersold, and J.-M. Bruel, ‘RQCODE: Towards Object-Oriented Requirements in the Software Security Domain’, in IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Apr. 2022, pp. 2-6.
