PhD Position F/M Private and Fair Machine Learning

Inria

France

On-site

Full-time

Posted 13 days ago

Job summary

Inria is seeking a PhD student for a thesis on AI, with a particular focus on privacy and fairness in machine learning. You will work with experts at world-renowned institutions while benefiting from first-rate training and taking part in innovative projects.

Benefits

Partial reimbursement of public transport costs
7 weeks of annual leave + 10 RTT days
Teleworking possible after 6 months
Professional equipment provided
Access to social and cultural events
Professional training
Social security coverage

Qualifications

  • A good understanding of privacy and fairness mechanisms is a plus.
  • Strong analytical skills are required.

Responsibilities

  • Carry out the AI research project with a focus on privacy.
  • Collaborate with project partners on applied research.
  • Disseminate research results at conferences.

Skills

Analysis
Python programming
Deep learning
Probability/statistics
Fluency in English

Job description

Inria, the French national research institute for the digital sciences

Organisation/Company: Inria, the French national research institute for the digital sciences
Research Field: Computer science
Researcher Profile: First Stage Researcher (R1)
Country: France
Application Deadline: 26 Jul 2025 - 00:00 (UTC)
Type of Contract: Temporary
Job Status: Full-time
Hours Per Week: 38.5
Offer Starting Date: 1 Oct 2025
Is the job funded through the EU Research Framework Programme? Not funded by an EU programme
Reference Number: 2025-08971
Is the job related to a staff position within a Research Infrastructure? No

Offer Description

Context. This PhD thesis is part of the ANR JCJC project AI-PULSE (Aligning Privacy, Utility, and Fairness for Responsible AI), coordinated by Héber H. Arcolezi. AI-PULSE, which started in March 2025, aims to design machine learning models that are both differentially private and fairness-aware.

The thesis will be conducted under a co-tutelle agreement between Inria Grenoble (France) and ÉTS Montreal (Canada), leveraging the complementary strengths of both institutions.

The envisioned plan is for the recruited PhD student to spend approximately two years at Inria Grenoble (Privatics team), followed by two years at ÉTS Montreal. This structure will provide a rich and balanced international training environment, enabling the student to benefit from diverse expertise, ecosystems, and research cultures.

The thesis will be co-supervised by:

  • Héber H. Arcolezi, Researcher at Inria and incoming Assistant Professor at ÉTS Montreal (February 2026).

The PhD project will contribute to the core objectives of AI-PULSE, with a particular focus on advancing methodologies that jointly address privacy and fairness in machine learning under local differential privacy. The student will also benefit from the broader international collaborations and mobility opportunities enabled by the co-tutelle agreement, further strengthening the international dimension of their training and research.

Assignment. Modern Machine Learning (ML) systems increasingly drive automated decision-making across sectors like healthcare, finance, and public policy. While these systems can offer remarkable benefits, they also raise critical concerns regarding individual privacy and algorithmic fairness.

Differential Privacy (DP) [1] has emerged as a gold-standard privacy notion for balancing the privacy-utility trade-off in data analytics. However, the central (server-side) approach to DP requires trust in a third party holding the raw data. Hence, there is growing interest in Local Differential Privacy (LDP) [2], which performs data obfuscation directly at the user’s side, removing the need to trust a central server with unprotected personal information.
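As an illustration (ours, not part of the offer), the canonical LDP primitive is randomized response: each user perturbs their own bit before reporting it, and the server debiases the aggregate. A minimal Python sketch, with hypothetical function names:

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise the flipped bit; this satisfies epsilon-LDP."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p else 1 - true_bit

def debiased_proportion(reports, epsilon: float) -> float:
    """Unbiased server-side estimate of the true proportion of 1s,
    computed from the noisy reports alone."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# 10,000 users, 60% of whom hold a 1; the server never sees raw bits.
random.seed(0)
reports = [randomized_response(b, 1.0) for b in [1] * 6000 + [0] * 4000]
estimate = debiased_proportion(reports, 1.0)  # close to 0.6
```

The point of the sketch is the trust model: obfuscation happens client-side, so the aggregator only ever handles already-privatized reports.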

Meanwhile, fairness in ML [3] typically involves mitigating disparate impact among subgroups defined by sensitive attributes (e.g., race, gender). Yet, fairness interventions often require exactly the sort of sensitive information that privacy mechanisms try to hide. In addition, privacy and fairness can inadvertently impact each other, bringing new privacy-fairness-utility trade-offs.

Specifically, the interplay between privacy and fairness is both crucial and nuanced:
- Satisfying DP can unintentionally worsen fairness if certain subpopulations are more sensitive to noise [4].
- Enforcing fairness can unintentionally worsen privacy [5].
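To make the first point concrete, here is a small self-contained simulation (our illustration, not from the project): releasing subgroup rates with the same Laplace noise degrades a small subgroup's estimate far more than a large one's, since the noise magnitude is constant while the denominator shrinks.

```python
import random

def dp_positive_rate(count: float, group_size: int, epsilon: float) -> float:
    """Release a positive rate with epsilon-DP: add Laplace(1/epsilon)
    noise to the raw count (sensitivity 1), then normalize."""
    # The difference of two iid Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return (count + noise) / group_size

random.seed(1)
trials = 2000
# Both groups have a true positive rate of 0.5; only their sizes differ.
err_large = sum(abs(dp_positive_rate(5000, 10000, 0.5) - 0.5)
                for _ in range(trials)) / trials
err_small = sum(abs(dp_positive_rate(50, 100, 0.5) - 0.5)
                for _ in range(trials)) / trials
# err_small is roughly 100x err_large: identical noise, 100x smaller group.
```

The same effect shows up in DP model training, where underrepresented subgroups absorb proportionally more of the noise [4].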

Therefore, the aim of this PhD thesis is to design, analyze, and implement differential privacy mechanisms that foster equitable ML outcomes while preserving model performance (utility). A key objective will then be to design a comprehensive open-source Python framework, offering ready-to-use building blocks for private and fair machine learning.

Research Objective. The goal of this PhD thesis is to advance the state of the art at the intersection of privacy and fairness by designing new representation learning techniques under LDP that enable fair and effective ML models. In particular, we aim to develop new locally private representations that preserve the utility of the data for downstream tasks while facilitating fair learning, even when sensitive attributes are obfuscated or partially unavailable.

The thesis will focus on:

  • Designing new LDP-based data representations or embeddings optimized for fairness-aware ML.
  • Theoretically analyzing the privacy-fairness-utility trade-offs of these representations.
  • Experimentally validating the proposed methods on benchmark datasets.

The expected outcomes of this thesis will contribute both to the core objectives of AI-PULSE and to the broader responsible AI community by offering practical tools and theoretical insights for privacy-preserving and fair ML in decentralized settings.

Selected references:

[1] Dwork, Cynthia, and Aaron Roth. "The algorithmic foundations of differential privacy." Foundations and Trends in Theoretical Computer Science 9.3–4 (2014): 211–407.
[2] Yang, Mengmeng, et al. "Local differential privacy and its applications: A comprehensive survey." Computer Standards & Interfaces 89 (2024): 103827.
[3] Barocas, Solon, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. MIT Press, 2023.
[4] Bagdasaryan, Eugene, Omid Poursaeed, and Vitaly Shmatikov. "Differential privacy has disparate impact on model accuracy." Advances in Neural Information Processing Systems 32 (2019).
[5] Chang, Hongyan, and Reza Shokri. "On the privacy risks of algorithmic fairness." 2021 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2021.

Main activities:

  • Carry out the PhD research project on Differentially Private and Fairness-Aware Machine Learning.
  • Collaborate with other team members and with project partners (e.g., UQAM, Federal University of Ceará, Inria, ÉTS Montréal).
  • Disseminate research results through publications and presentations at international conferences.

Skills:

  • Good programming skills in Python and good analytical skills.
  • A good background in probability/statistics and deep learning is expected.
  • Knowledge of differential privacy and/or fairness is a plus, but not necessary.
  • The candidate should be fluent in English.

Languages: French (basic), English (good)

Additional Information
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Social security coverage
Selection process

Applications must include a CV, covering letter, copy of diploma and, if applicable, valid proof of disabled worker status.

Applications must be submitted online via the Inria website. Processing of applications submitted via other channels is not guaranteed.
