DESCRIPTION:
The List Institute at CEA Tech (CEA's technological research division) dedicates its activities to driving innovation in intelligent digital systems. Its specialized R&D programs aim to deliver technological developments of excellence in critical industry sectors, in partnership with key industrial and academic actors.
Within the List Institute, at the heart of the Paris-Saclay campus (Essonne), the Embedded and Autonomous Systems Design Laboratory (LSEA) works on methods and tools for the design and development of trustworthy autonomous systems that incorporate AI-based components. In particular, LSEA's Trustworthy Deep Learning (TDL) team conducts research on confidence (uncertainty) representation and monitoring in deep neural networks (DNNs) for computer vision tasks and automated robots.
Mission
In recent years, large language models (LLMs) have demonstrated remarkable proficiency in a wide variety of natural language processing tasks such as reasoning and question answering. For this reason, they are increasingly deployed in real-world settings, including safety-critical domains such as medicine (medical diagnosis) and automated robots that interact with humans. Unfortunately, LLMs have a tendency to "hallucinate," i.e., produce predictions that are nonsensical or unfaithful when facing unfamiliar queries. This limitation hinders wider adoption of LLMs in safety-critical domains, where it is paramount that these models convey a reliable notion of trust in their predictions.
By using a prediction confidence measure, LLMs should be able to abstain from offering incorrect answers when presented with unfamiliar questions, contexts, or unsolvable problems. Toward building reliable and safe automated agents and systems, several works have proposed methods to express confidence in LLMs. In this internship, we therefore seek to apply uncertainty estimation methods to LLMs to detect hallucinations in code generation and/or code translation tasks.
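For concreteness, here is a minimal sketch of one of the simplest confidence measures from the literature: scoring a generated answer by its mean token log-probability and abstaining below a tuned threshold. This is an illustrative baseline, not the team's method; the model name, prompt, and threshold are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Translate this C function to Python: int add(int a, int b) { return a + b; }"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=False,
        output_scores=True,
        return_dict_in_generate=True,
        pad_token_id=tokenizer.eos_token_id,
    )

# out.scores holds one logit tensor per generated token; convert each to
# the log-probability of the token that was actually chosen.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
log_probs = [
    torch.log_softmax(logits[0], dim=-1)[tok].item()
    for logits, tok in zip(out.scores, gen_tokens)
]

confidence = sum(log_probs) / max(len(log_probs), 1)  # mean token log-prob
print("answer:", tokenizer.decode(gen_tokens, skip_special_tokens=True))
print(f"mean log-prob: {confidence:.3f}")

THRESHOLD = -2.5  # illustrative value; would be calibrated on validation data
if confidence < THRESHOLD:
    print("Low confidence: abstain / flag as possible hallucination.")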
Internship Objectives
* Study state-of-the-art methods for uncertainty estimation and hallucination detection in LLMs.
* Evaluate and compare common methods for uncertainty estimation and hallucination detection (one such baseline is sketched after this list).
* Challenge existing methods by identifying vulnerabilities.
* Design improvements to existing methods.
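As referenced above, the sketch below illustrates another common baseline that such a comparison could include: self-consistency, which samples several answers and treats low agreement as a hallucination signal. The model, prompt, and sampling parameters are again illustrative placeholders, not specifics of the internship.

import collections
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: What does len([1, 2, 3]) return in Python? A:"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=True,
        temperature=0.8,
        num_return_sequences=10,
        pad_token_id=tokenizer.eos_token_id,
    )

answers = [
    tokenizer.decode(seq[inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()
    for seq in out
]

# Agreement ratio of the most frequent answer; low agreement suggests the
# model is guessing rather than answering reliably.
top_answer, count = collections.Counter(answers).most_common(1)[0]
agreement = count / len(answers)
print(f"majority answer: {top_answer!r}, agreement: {agreement:.2f}")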
Job code: Artificial Intelligence Engineer (m/f)
Current professional domain: IT R&D Professionals
Education level: Bac+8
Part-time / Full-time: Full-time
Contract type: Internship/Recent graduate
Skills: Artificial Intelligence, Artificial Neural Networks, Computer Vision, Code Generation, Digital Technology, Python (Programming Language), Natural Language Processing, PyTorch, Large Language Models, Deep Learning, Honesty, Automated Systems, Autonomous Systems, Diagnosis (Medicine), Medicine, Research and Development, Robotics Design and Implementation, Critical Systems Lifecycle
Email:
internet.saclay@cea.fr
Phone:
0160833031
Advertiser type: Direct employer