Inria is recruiting a doctoral candidate for a PhD thesis on AI, in particular on privacy and fairness in machine learning. You will work with experts at world-renowned institutions while benefiting from first-rate training and participating in innovative projects.
Inria, the French national research institute for the digital sciences
Organisation/Company: Inria, the French national research institute for the digital sciences
Research Field: Computer science
Researcher Profile: First Stage Researcher (R1)
Country: France
Application Deadline: 26 Jul 2025 - 00:00 (UTC)
Type of Contract: Temporary
Job Status: Full-time
Hours Per Week: 38.5
Offer Starting Date: 1 Oct 2025
Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
Reference Number: 2025-08971
Is the job related to a staff position within a Research Infrastructure? No
Context. This PhD thesis is part of the ANR JCJC project AI-PULSE (Aligning Privacy, Utility, and Fairness for Responsible AI), coordinated by Héber H. Arcolezi. AI-PULSE, which started in March 2025, aims to design machine learning models that are both differentially private and fairness-aware.
The thesis will be conducted under a co-tutelle agreement between Inria Grenoble (France) and ÉTS Montreal (Canada), leveraging the complementary strengths of both institutions.
The envisioned plan is for the recruited PhD student to spend approximately two years at Inria Grenoble (Privatics team), followed by two years at ÉTS Montreal. This structure will provide a rich and balanced international training environment, enabling the student to benefit from diverse expertise, ecosystems, and research cultures.
The thesis will be co-supervised by:
The PhD project will contribute to the core objectives of AI-PULSE, with a particular focus on advancing methodologies that jointly address privacy and fairness in machine learning under local differential privacy. The student will also benefit from the broader international collaborations and mobility opportunities enabled by the co-tutelle agreement, further strengthening the international dimension of their training and research.
Assignment. Modern Machine Learning (ML) systems increasingly drive automated decision-making across sectors like healthcare, finance, and public policy. While these systems can offer remarkable benefits, they also raise critical concerns regarding individual privacy and algorithmic fairness.
Differential Privacy (DP) [1] has emerged as the gold-standard privacy notion for balancing the privacy-utility trade-off in data analytics. However, the central (server-side) DP model requires trusting a third party that holds the raw data. Hence, there is growing interest in Local Differential Privacy (LDP) [2], where data are obfuscated directly on the user's device, removing the need to trust a central server with unprotected personal information.
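As background (not part of the official posting), here is a minimal Python sketch of k-ary randomized response, the canonical epsilon-LDP mechanism for a single categorical attribute; the domain and epsilon value below are purely illustrative:

    from math import exp
    import random

    def randomized_response(true_value, domain, epsilon):
        # k-ary randomized response: report the true value with probability
        # e^eps / (e^eps + k - 1), otherwise report a uniformly random
        # *other* value from the domain. This satisfies eps-LDP.
        k = len(domain)
        p_true = exp(epsilon) / (exp(epsilon) + k - 1)
        if random.random() < p_true:
            return true_value
        return random.choice([v for v in domain if v != true_value])

    # Each user perturbs their own value locally; the server only ever
    # sees the noisy report.
    report = randomized_response("B", ["A", "B", "C"], epsilon=1.0)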
Meanwhile, fairness in ML [3] typically involves mitigating disparate impact among subgroups defined by sensitive attributes (e.g., race, gender). Yet, fairness interventions often require exactly the sort of sensitive information that privacy mechanisms try to hide. In addition, privacy and fairness can inadvertently impact each other, bringing new privacy-fairness-utility trade-offs.
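For illustration only (the function name and toy data below are made up, not drawn from the posting), one widely used group-fairness measure, the statistical parity (demographic parity) difference, can be computed as follows:

    import numpy as np

    def statistical_parity_difference(y_pred, groups):
        # Gap between the highest and lowest positive-prediction rates
        # across subgroups; 0 means equal rates (demographic parity).
        rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

    # Toy data: binary predictions for eight individuals in two groups.
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
    print(statistical_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5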
Specifically, the interplay between privacy and fairness is both crucial and nuanced:
- Satisfying DP can unintentionally worsen fairness if certain subpopulations are more sensitive to noise [4].
- Enforcing fairness can unintentionally worsen privacy [5].
Therefore, the aim of this PhD thesis is to design, analyze, and implement differential privacy mechanisms that foster equitable ML outcomes while preserving model performance (utility). A key objective will be to design a comprehensive open-source Python framework offering ready-to-use building blocks for private and fair machine learning.
Research Objective. The goal of this PhD thesis is to advance the state of the art at the intersection of privacy and fairness by designing new representation learning techniques under LDP that enable fair and effective ML models. In particular, we aim to develop new locally private representations that preserve the utility of the data for downstream tasks while facilitating fair learning, even when sensitive attributes are obfuscated or partially unavailable.
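As a rough sketch of the kind of baseline such work typically improves upon (an assumption for illustration, not the project's actual method), a client-side Laplace mechanism can privatize a bounded feature vector before it leaves the device:

    import numpy as np

    def laplace_ldp_vector(x, epsilon, low=0.0, high=1.0):
        # Clip each coordinate to [low, high]; the worst-case L1 distance
        # between any two clipped d-dimensional vectors is d * (high - low),
        # so Laplace noise with scale sensitivity / epsilon yields eps-LDP.
        x = np.clip(np.asarray(x, dtype=float), low, high)
        sensitivity = x.size * (high - low)
        noise = np.random.laplace(0.0, sensitivity / epsilon, size=x.shape)
        return x + noise

    # Runs on the client: only the noisy representation is shared.
    private_repr = laplace_ldp_vector([0.2, 0.9, 0.5], epsilon=2.0)

The heavy noise this baseline requires in high dimensions is precisely why more utility-preserving locally private representations are sought.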
The thesis will focus on:
The expected outcomes of this thesis will contribute both to the core objectives of AI-PULSE and to the broader responsible AI community by offering practical tools and theoretical insights for privacy-preserving and fair ML in decentralized settings.
Selected references:
[1] Dwork, Cynthia, and Aaron Roth. "The Algorithmic Foundations of Differential Privacy." Foundations and Trends in Theoretical Computer Science 9.3–4 (2014): 211–407.
[2] Yang, Mengmeng, et al. "Local Differential Privacy and Its Applications: A Comprehensive Survey." Computer Standards & Interfaces 89 (2024): 103827.
[3] Barocas, Solon, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. MIT Press, 2023.
[4] Bagdasaryan, Eugene, Omid Poursaeed, and Vitaly Shmatikov. "Differential Privacy Has Disparate Impact on Model Accuracy." Advances in Neural Information Processing Systems 32 (2019).
[5] Chang, Hongyan, and Reza Shokri. "On the Privacy Risks of Algorithmic Fairness." 2021 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2021.
Main activities:
Languages: English (Level: Good)
Applications must include a CV, a covering letter, a copy of your diploma, and valid proof of disabled worker status.
Applications must be submitted online via the Inria website. Processing of applications submitted via other channels is not guaranteed.