DESCRIPTION:
The internship will take place in the DataMove team, located in the IMAG building on the Saint Martin d'Heres campus (Univ. Grenoble Alpes) near Grenoble. The internship lasts a minimum of 4 months and the start date is flexible, but administrative constraints require a 2-month delay before the internship can begin. The DataMove team is a friendly and stimulating environment that gathers professors, researchers, PhD and Master students, all conducting research on High-Performance Computing. Grenoble is a student-friendly city surrounded by the Alps, offering a high quality of life and all kinds of mountain-related outdoor activities.
Assigned mission
Subject context
In supervised learning, successfully training advanced neural networks requires annotated data of sufficient quantity and quality. In the natural sciences (physics, chemistry, weather modeling), observational data remains a limiting factor. One alternative is to create synthetic training data numerically. This offers several advantages: synthetic data can be generated at will, in potentially unlimited amounts; its quality can be degraded in a controlled manner for more robust training; and the coverage of the parameter space can be adapted to focus training where relevant. Today, a large variety of simulation codes for creating such data is available, from computer graphics, computer engineering, computational physics, biology, chemistry, and so on. When training data is produced by simulation codes, it can be generated along with the training.
This approach has multiple benefits. First, there is no need to store and move a huge pre-created data set: float matrices of data can take terabytes of memory, and reading them from disk at every training iteration might take more time than the iteration itself. Instead, data is stored in working memory and created "on the fly": when a new data point is created, it replaces an old one. This allows the model to see terabytes of data over its lifetime while storing only a small part of it at a time. Second, training is not done with the same repeated data as in the epoch-based approach. A continuously updated training set potentially improves the generalization quality of the model. More importantly, the update of the training set and the creation of new data can be adaptive, driven by the observed behavior of the neural network during training. However, this adaptive data generation is a challenging question.
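The replacement scheme above can be sketched as a fixed-size buffer where each freshly simulated sample evicts the oldest one. This is a minimal illustrative sketch, not the MelissaDL API:

```python
import random
from collections import deque

class OnlineBuffer:
    """Fixed-capacity window over a stream of simulated samples."""

    def __init__(self, capacity):
        # deque(maxlen=...) drops the oldest entry automatically when full.
        self.samples = deque(maxlen=capacity)

    def add(self, sample):
        # A new data point replaces the oldest one once capacity is reached.
        self.samples.append(sample)

    def draw_batch(self, batch_size):
        # Each training iteration samples only from the current window.
        return random.sample(list(self.samples), min(batch_size, len(self.samples)))

buffer = OnlineBuffer(capacity=1000)
for step in range(5000):                    # 5000 data points seen overall
    buffer.add({"input": step, "output": step * 2})
batch = buffer.draw_batch(32)               # only 1000 ever held in memory
```

Over the run, the model sees five times more data than the buffer ever holds, which is the memory/throughput trade-off described above.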
Active learning addresses this challenge by adaptively sampling the input parameters of simulators based on training progress, aiming to generate more relevant data and thus faster, higher-quality training. Current approaches to active learning for simulation-based training often follow a phased algorithm: 1) generate an initial training set by uniformly sampling input points; 2) (re)train the model on the training set; 3) use feedback from the model's performance to generate or augment the training set, and return to (2). Fundamentally, the methods differ in the choice of "feedback" metric (acquisition function) and in the way the next training set is created (acquisition algorithm).
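The three-phase loop can be sketched on a toy problem. All names here are illustrative assumptions (a 1-D simulator, a polynomial surrogate, and a loss-driven acquisition function), not a specific published algorithm:

```python
import numpy as np

def simulator(x):
    # Stand-in for an expensive simulation code.
    return np.sin(3 * x)

def train(xs, ys, degree=5):
    # Phase 2: (re)train a cheap surrogate on the current training set.
    return np.polyfit(xs, ys, degree)

def acquire(model, xs, ys, n_new, rng):
    # Phase 3: feedback metric = pointwise squared error of the surrogate;
    # acquisition = resample near the worst-predicted points.
    losses = (np.polyval(model, xs) - ys) ** 2
    probs = losses / losses.sum()
    centers = rng.choice(xs, size=n_new, p=probs)
    return np.clip(centers + rng.normal(0.0, 0.05, n_new), 0.0, 1.0)

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 50)                 # Phase 1: uniform initial set
ys = simulator(xs)
for _ in range(3):
    model = train(xs, ys)
    new_xs = acquire(model, xs, ys, 20, rng)
    xs = np.concatenate([xs, new_xs])          # augment and return to phase 2
    ys = np.concatenate([ys, simulator(new_xs)])
```

Swapping in a different `acquire` (or a different loss metric inside it) is exactly the design axis the paragraph above describes.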
Our research
Our team's research focuses on exploring and developing new online active learning methods for the efficient training of surrogates -- neural networks meant to substitute for simulation codes. We have developed Breed, a method for online adaptive training of surrogates such as Physics-Informed Neural Networks (PINNs), Neural Operators, and basic Dense Neural Networks, within our MelissaDL framework, which allows the training to be highly distributed and the training data to be created on the fly.
Our related publications
* MelissaDL x Breed: Towards Data-Efficient On-line Supervised Training of Multi-parametric Surrogates with Active Learning, SC AI4S 2024: https://hal.science/hal-04712480v1
* Training Deep Surrogate Models with Large Scale Online Learning, ICML 2023: https://hal.science/hal-04102400v1
* Loss-driven sampling within hard-to-learn areas for simulation-based neural network training, NeurIPS ML4Phys 2023: https://hal.science/hal-04305233v1
* Melissa: Simulation-Based Parallel Training, NeurIPS AI4S 2022: https://hal.science/hal-03842106v1
Main activities
This internship focuses on investigating the use of generative methods for active learning, e.g., diffusion posterior sampling to generate input points based on the model's uncertainty. Currently, the Breed method uses an importance sampling technique and loss statistics.
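As a point of reference for what "importance sampling from loss statistics" means, here is a minimal sketch (illustrative only, not the actual Breed implementation): keep a running loss statistic per region of the input space and draw the next simulation inputs with probability proportional to it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 10
running_loss = np.ones(n_bins)   # uniform prior before any training feedback

def update_stats(inputs, losses, alpha=0.5):
    # Exponential moving average of the loss observed in each input bin.
    bins = np.minimum((inputs * n_bins).astype(int), n_bins - 1)
    for b, l in zip(bins, losses):
        running_loss[b] = (1 - alpha) * running_loss[b] + alpha * l

def sample_inputs(n):
    # Importance sampling: harder bins receive more new simulation inputs.
    probs = running_loss / running_loss.sum()
    chosen = rng.choice(n_bins, size=n, p=probs)
    return (chosen + rng.uniform(0.0, 1.0, n)) / n_bins

# Pretend the surrogate is bad near 0 and good near 1:
update_stats(np.array([0.05, 0.95]), np.array([9.0, 0.1]))
xs = sample_inputs(1000)   # most new inputs land in the high-loss bin
```

A generative approach, as proposed for this internship, would replace the per-bin histogram with a learned sampler (normalizing flow, diffusion model, etc.) over the input space.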
At the beginning, the objective is to get familiar with the domain and read about existing work: surrogates, neural operators, active learning, online training, Bayesian methods. Then you will start working on possible generative methods for active learning (normalizing flows, diffusion models, generative adversarial networks, energy-based models, etc.), developing and evaluating their performance through experiments on use cases such as the heat equation and fluid dynamics equations. You will work in a team consisting of a PhD student, a research engineer, and a research director (Bruno Raffin); we have regular meetings and daily communication -- you will not be alone!
Applications must be submitted online via the Inria website. Processing of applications submitted via other channels is not guaranteed.
Defense security:
This position may be assigned to a restricted-access zone (zone à régime restrictif, ZRR), as defined in decree no. 2011-1425 on the protection of the nation's scientific and technical potential (PPST). Authorization to access such a zone is granted by the head of the institution, after a favorable ministerial opinion, as defined in the order of 3 July 2012 relating to the PPST. An unfavorable ministerial opinion for a position assigned to a ZRR would result in the cancellation of the recruitment.
Recruitment policy:
As part of its diversity policy, all Inria positions are accessible to people with disabilities.
Job code: Intern (m/f)
Part-time / Full-time: Full-time
Contract type: Internship / Recent graduate
Skills: Training Data, Artificial Intelligence, Artificial Neural Networks, Computer Programming, Computer Engineering, Computer Graphics, Python (Programming Language), E-learning, Supervised Learning, Deep Learning, Data Generation, Friendliness, Success-Oriented, Emotional Stability, Team Spirit, Curiosity, Research, Algorithms, Chemistry, Biology, Educational Technology, Computational Physics, Equations, Experimentation, Fluid Dynamics, Natural Sciences, Outdoor Activities, Physical Sciences, Needs Analysis, Simulations, Studies and Statistics, Administrative Management, Coaching, Request Processing
Email:
sofya.dymchenko@inria.fr
Phone:
0139635511
Advertiser type: Direct employer