
Post-doctoral position M/F – Hierarchical goal-oriented data compression with AI

Mitsubishi Electric Corporation

Rennes

On-site

EUR 40 000 - 60 000

Full-time


Job summary

A leading research and development organization in Rennes is seeking a researcher to join its Synergistic Autonomous Systems team. The role involves designing compression strategies leveraging AI for data transmission. Ideal candidates will have a PhD in Artificial Intelligence or Information Processing, proficiency in Python, and hands-on experience with neural networks. This role is a 12-month contract with immediate availability.

Qualifications

  • Proficiency in Python programming, including experience with relevant libraries and frameworks.
  • Hands-on experience with neural networks, including architecture design and training procedures.
  • Background in compression systems and entropy coding.

Responsibilities

  • Implement state-of-the-art compression and generative AI algorithms.
  • Develop and integrate technical solutions into experimental or production environments.
  • Contribute to original ideas in goal-oriented compression and related AI-driven data processing.

Knowledge

Python programming
Knowledge of neural networks

Education

PhD in Artificial Intelligence or Information Processing
Job description
Research project

Mitsubishi Electric R&D Centre Europe (MERCE), located in the Rennes Atalante technology park, is a key player in the Mitsubishi Electric Group’s global research and development activities. Within MERCE, the Digital Information Systems (DIS) division hosts the Synergistic Autonomous Systems (SAS) team, which focuses its research on autonomous systems. The team places particular emphasis on the synergy between multiple systems and technological domains, including telecommunications, control, artificial intelligence, and autonomy.

Connected systems generate massive volumes of data, making transmission increasingly costly in terms of bandwidth and energy. Traditional compression methods often aim to preserve reconstruction quality, but this is not always necessary. The Hierarchical Goal-Oriented Compression with AI approach focuses instead on maintaining the performance of downstream tasks—such as detection, classification, or control—by selectively compressing and transmitting only the most relevant information.
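
The contrast between reconstruction-oriented and goal-oriented transmission can be illustrated with a minimal, purely hypothetical sketch (the sensor readings, threshold, and alarm task below are invented for illustration, not part of the project): if the downstream task only needs an alarm decision, transmitting one bit per reading preserves task performance at a fraction of the payload of full-precision reconstruction.

```python
# Toy comparison: reconstruction-oriented vs goal-oriented transmission.
# Hypothetical sensor readings; the downstream "task" is a threshold alarm.
readings = [12.04, 11.97, 12.01, 35.62, 12.03]
THRESHOLD = 20.0

# Reconstruction-oriented: transmit every reading at full precision
# (e.g. one 64-bit float per sample).
full_payload_bits = 64 * len(readings)

# Goal-oriented: the task only needs the alarm decision, so transmit
# a single bit per reading.
task_payload = [r > THRESHOLD for r in readings]
task_payload_bits = len(task_payload)

# The task outcome is identical, at a much lower transmission cost.
assert any(task_payload) == any(r > THRESHOLD for r in readings)
print(full_payload_bits, task_payload_bits)
```

Real goal-oriented compressors learn which features to keep rather than hard-coding a threshold, but the principle is the same: rate is spent only on information the task consumes.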

The integration of AI has significantly transformed the field of data compression. In learned image compression, latent space modeling improves the estimation of the probability distribution of the latent variable, which is essential for effective entropy coding, and helps design encoders that account for the bit cost of the latent representation. Generative models rely on fine-grained probability estimation over the latent space and autoregressive sampling, demonstrating the power of transformer neural networks for encoding and modeling complex data distributions. These developments show that modern neural architectures offer powerful tools for both information encoding and probability estimation, which are central to efficient compression. Recent work has begun to highlight strong connections between compression and generative AI.
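
The link between probability modeling and bit cost can be made concrete with a short sketch: under an entropy model p, the ideal code length of a symbol z is -log2 p(z) bits (Shannon's bound), so a model that fits the latent distribution better yields a shorter code. The symbol values and probability tables below are toy assumptions, not from the project.

```python
import math

def code_length_bits(latents, pmf):
    """Ideal total code length (in bits) of quantized latents under a
    probability model: the sum of -log2 p(z). A better-fitting model
    assigns higher probability to the observed symbols, hence fewer bits."""
    return sum(-math.log2(pmf[z]) for z in latents)

# Toy example: two candidate entropy models for the same latent sequence.
latents = [0, 0, 1, 0, 2, 0, 0, 1]
uniform = {0: 1/3, 1: 1/3, 2: 1/3}   # no modeling effort
fitted = {0: 5/8, 1: 2/8, 2: 1/8}    # matches the empirical frequencies

print(code_length_bits(latents, uniform))  # higher bit cost
print(code_length_bits(latents, fitted))   # lower bit cost
```

This is exactly why autoregressive or transformer-based probability estimators help compression: sharper conditional probabilities translate directly into fewer bits at the entropy coder.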

Hierarchical and Adaptive Encoding for Digital Twins. In applications such as digital twin updates, a hierarchical goal-oriented strategy is crucial. Systems might first transmit coarse information—like object positions and sizes—and then send finer details as needed, depending on available resources and task requirements. The system should also identify information useful for several tasks and learn to extract relevant common information.
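
The coarse-then-refine idea can be sketched as successive-refinement quantization: each layer quantizes the residual of the previous reconstruction at half the step size, so a receiver can stop after any layer and still hold a usable, coarser estimate. This is a minimal illustration, not the project's actual encoding scheme; the step schedule and function names are assumptions.

```python
def hierarchical_encode(x, levels):
    """Encode a value as a coarse quantization plus successive refinements.
    Each level halves the step size; the receiver can stop early and still
    hold a usable (coarser) reconstruction, e.g. an object's position
    before its fine shape details in a digital-twin update."""
    layers, approx, step = [], 0.0, 1.0
    for _ in range(levels):
        q = round((x - approx) / step)  # quantize the current residual
        layers.append(q)
        approx += q * step              # refine the running reconstruction
        step /= 2
    return layers

def hierarchical_decode(layers):
    """Reconstruct from however many layers were received."""
    approx, step = 0.0, 1.0
    for q in layers:
        approx += q * step
        step /= 2
    return approx

layers = hierarchical_encode(3.3, 4)
coarse = hierarchical_decode(layers[:1])  # coarse, position-level estimate
fine = hierarchical_decode(layers)        # refined with all layers
```

A goal-oriented variant would stop refining once the downstream task's accuracy requirement is met, rather than at a fixed number of layers.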

Research Objectives. This research aims to explore how best to leverage these recent technical advances to design compression strategies that are both hierarchical and goal-oriented, enabling efficient and intelligent data transmission tailored to specific tasks. We aim to improve on existing work in this field, such as references [7][8][9][10], and adapt it to the scenario under consideration.

Detailed research objectives
  • Implement state-of-the-art compression and generative AI algorithms by reproducing training procedures across diverse datasets to validate and benchmark performance.
  • Develop and integrate technical solutions proposed by MERCE researchers into experimental or production environments.
  • Contribute to original ideas and innovations to advance research in goal-oriented compression and related AI-driven data processing techniques.
Prerequisites
  • PhD in Artificial Intelligence or Information Processing, with a focus on data compression or information theory.
  • Proficiency in Python programming, including experience with relevant libraries and frameworks.
  • Hands-on experience with neural networks, including architecture design and training procedures.
  • Background in compression systems and entropy coding; knowledge of information theory is considered a strong asset.

Vincent CORLAY, Senior Researcher

Duration

12 months

Period

As soon as possible from Oct 25

Contact

Magali BRANCHEREAU, HR Manager (jobs@fr.merce.mee.com)

Please send us your application (CV and cover letter in PDF format), specifying the reference of the job offer.

References

[1] D. Minnen, J. Ballé, and G. D. Toderici. Joint Autoregressive and Hierarchical Priors for Learned Image Compression. In Advances in Neural Information Processing Systems, 31, 2018

[2] Peize Sun, Yi Jiang, Shoufa Chen, Shilong Zhang, Bingyue Peng, Ping Luo, Zehuan Yuan. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation. arXiv preprint arXiv:2406.06525, 2024.

[3] K. Tian, Y. Jiang, Z. Yuan, B. Peng, L. Wang. Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction. Advances in Neural Information Processing Systems, 37, 2024.

[4] C. S. K. Valmeekam, K. Narayanan, D. Kalathil, J.-F. Chamberland, S. Shakkottai. LLMZip: Lossless Text Compression using Large Language Models. arXiv preprint arXiv:2306.04050, 2023.

[5] G. Delétang, A. Ruoss, P.-A. Duquenne, E. Catt, T. Genewein, C. Mattern, J. Grau-Moya, L. K. Wenliang, M. Aitchison, L. Orseau, M. Hutter, J. Veness. Language Modeling Is Compression. Proceedings of the International Conference on Learning Representations (ICLR), 2024.

[9] J. Machado de Freitas, S. Berg, B. C. Geiger, M. Mücke. Compressed Hierarchical Representations for Multi-Task Learning and Task Clustering. Proceedings of the International Joint Conference on Neural Networks (IJCNN), 2022.

[10] Z. Kang, K. Grauman, F. Sha. Learning with Whom to Share in Multi-task Feature Learning. Proceedings of the 28th International Conference on Machine Learning (ICML), Bellevue, WA, USA, 2011.
