Keynote & Tutorial

Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Language: In search of a neural code

Jean-Remi King¹, Linnea Evanson¹·², Hubert Banville¹, Lucy Zhang²; ¹Meta, ²Hôpital Fondation Adolphe de Rothschild

Abstract

In just a few years, language models have become a backbone of artificial intelligence (AI). Beyond this technical feat, the paradigm shift revives foundational questions of cognitive neuroscience, in particular how and why humans acquire, represent, and process language. In this talk, we will show how a systematic comparison between AI models and the human brain helps reveal major principles of the organization of natural language comprehension, production, and acquisition during the early years of human development. The results, based on a single analytical pipeline applied across more than a thousand individuals, show that fMRI, MEG, and intracranial recordings consistently highlight the hierarchical nature of language representations. They further reveal that the unfolding of these representations over time depends on a specific ‘dynamic neural code’, which allows a sequence of elements (e.g., words) to be represented simultaneously in brain activity, while preserving both their elementary and compositional structures. By bridging neuroscience, linguistics, and AI, these results provide an operational framework for uncovering the general principles that govern the acquisition, structuring, and manipulation of knowledge in biological and artificial neural networks.
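To make the model-to-brain comparison concrete, here is a minimal sketch of one common way such an analysis can be set up: a ridge "encoding model" maps word-level language-model embeddings onto brain responses, and held-out correlation scores how well each channel or voxel is predicted. All arrays, shapes, and names below are simulated placeholders for illustration, not the authors' actual pipeline.

```python
# Sketch of an encoding-style model-to-brain comparison (simulated data).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_dims, n_channels = 2000, 768, 64   # words, embedding size, sensors/voxels

X = rng.standard_normal((n_words, n_dims))               # stand-in for LM embeddings per word
W = rng.standard_normal((n_dims, n_channels)) * 0.05     # hidden linear mapping (simulation only)
Y = X @ W + rng.standard_normal((n_words, n_channels))   # stand-in for brain responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# L2-regularized linear map from embeddings to brain responses
model = RidgeCV(alphas=np.logspace(-2, 4, 7))
model.fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Pearson correlation between predicted and observed responses, per channel
r = [np.corrcoef(Y_hat[:, c], Y_te[:, c])[0, 1] for c in range(n_channels)]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```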

Tutorial Outline

In the following tutorial, our `Brain and AI` team at Meta will guide participants through building a simple but scalable linear and deep-learning decoding pipeline for MEG and fMRI recorded during natural language processing tasks.
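As a rough preview of what such a pipeline involves, below is a minimal sketch of a linear decoding baseline, assuming brain features (e.g., MEG sensor amplitudes or fMRI voxel patterns per stimulus) in X and a stimulus label per sample in y. The data, shapes, and labels are illustrative placeholders, not the tutorial's actual materials.

```python
# Sketch of a cross-validated linear decoding baseline (simulated data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
n_samples, n_features = 500, 300                    # e.g., epochs x (sensors * time points)
X = rng.standard_normal((n_samples, n_features))    # placeholder brain features
y = rng.integers(0, 2, size=n_samples)              # placeholder binary stimulus label

clf = make_pipeline(
    StandardScaler(),                               # z-score each feature
    LogisticRegression(C=1.0, max_iter=1000),       # linear decoder
)
scores = cross_val_score(clf, X, y,
                         cv=KFold(5, shuffle=True, random_state=0),
                         scoring="roc_auc")
print(f"decoding AUC: {scores.mean():.3f} ± {scores.std():.3f}")
```

The same structure scales up by swapping the linear decoder for a deep model while keeping the cross-validation and scoring unchanged.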