
A leading research institution in France is offering a PhD position focused on developing methodologies for improving the robustness of AI in embedded systems. This role involves conducting experiments related to safety and security in autonomous technologies, requiring candidates to hold a Master's degree in a relevant field. Join a project that addresses critical challenges in embedded AI modules used in sectors like healthcare and automotive.
Organisation/Company: Grenoble INP - LCIS
Research Field: Computer science » Informatics Engineering » Electronic engineering
Researcher Profile: First Stage Researcher (R1), Recognised Researcher (R2), Established Researcher (R3), Leading Researcher (R4)
Country: France
Application Deadline: 5 Feb 2026 - 22:00 (UTC)
Type of Contract: Temporary
Job Status: Full-time
Offer Starting Date: 1 Jan 2026
Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
Is the job related to a staff position within a Research Infrastructure? No
The growing integration of Artificial Intelligence (AI) modules into safety-critical embedded systems (autonomous vehicles, drones, industrial and medical devices) raises major safety and security concerns. These modules, often based on deep neural networks, are sensitive to both accidental faults and intentional attacks that can alter their decisions [1]-[5]. Ensuring their robustness under real-world conditions is therefore essential for trustworthy deployment. Current approaches mainly focus on software-level adversarial robustness or high-level fault tolerance, and rarely address accidental faults and deliberate attacks within a single framework.
This PhD aims to develop a unified methodology for evaluating and improving the robustness of embedded AI modules against various real-world and physical disturbances, encompassing both safety-related faults and security-related attacks. The work will:
(1) identify, model, and reproduce representative perturbations that may cause abnormal or unsafe behavior;
(2) evaluate their effects on performance, safety, and security metrics;
(3) propose and validate mitigation and hardening techniques at the model, system, and learning levels.
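As a concrete illustration of objectives (1) and (2), the sketch below simulates an accidental fault by flipping a single bit in one weight of a toy network and measures how many decisions change. This is a minimal simulation sketch with assumed ingredients (the toy model, the single-bit-flip fault model, the decision-mismatch metric); it stands in for, and does not reproduce, the hardware fault-injection experiments planned at LCIS.

```python
# Illustrative sketch only: simulating an accidental fault (one bit flip in a
# weight) and measuring its effect on decisions. Model and metric are assumed.
import random
import struct

import torch
import torch.nn as nn

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 value (a simple memory-fault model)."""
    packed = struct.unpack("<I", struct.pack("<f", value))[0]
    packed ^= 1 << bit
    return struct.unpack("<f", struct.pack("<I", packed))[0]

def inject_random_fault(model: nn.Module) -> None:
    """Flip a random bit in a randomly chosen weight of the model."""
    params = [p for p in model.parameters() if p.requires_grad]
    tensor = random.choice(params)
    flat = tensor.detach().view(-1)
    idx = random.randrange(flat.numel())
    bit = random.randrange(32)
    with torch.no_grad():
        flat[idx] = flip_bit(float(flat[idx]), bit)

# Toy stand-in for an embedded AI perception/decision module.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(128, 16)
baseline = model(x).argmax(dim=1)  # decisions before the fault

inject_random_fault(model)
faulty = model(x).argmax(dim=1)    # decisions after the fault

# A simple robustness metric: fraction of decisions altered by the fault.
mismatch = (baseline != faulty).float().mean().item()
print(f"Decisions altered by one bit flip: {mismatch:.1%}")
```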
The targeted application will concern multi-sensor systems for autonomous vehicles that embed AI-based perception or decision modules. The experiments will rely on embedded platforms available at LCIS, including electromagnetic fault-injection benches. The methodology will address both safety-related disturbances (accidental faults) and security-related threats (intentional perturbations), highlighting the differences between them in embedded AI systems.
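For contrast, the sketch below illustrates the security side with a minimal untargeted FGSM-style perturbation: unlike the random bit flip above, the disturbance is deliberately computed from the input gradient so as to change the decision. The model and the perturbation budget epsilon are again assumptions for illustration; attacks on real embedded perception stacks are considerably more involved.

```python
# Illustrative sketch only: an intentional (security-related) perturbation,
# computed from the input gradient, as opposed to a random accidental fault.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for an embedded decision module.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(1, 16, requires_grad=True)

# Use the model's own clean decision as the label to attack (untargeted FGSM).
with torch.no_grad():
    label = model(x).argmax(dim=1)

# The gradient of the loss w.r.t. the *input* gives the attack direction.
loss = F.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.1  # attacker's perturbation budget (assumed)
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean decision:    ", label.item())
print("perturbed decision:", model(x_adv).argmax(dim=1).item())
```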