
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

The role of context in neural representational alignment to audio- and text-based language systems

Marianne De Heer Kloots1, Josef Parvizi2, Laura Gwilliams2; 1University of Amsterdam, 2Stanford University

Presenter: Marianne De Heer Kloots

Speech understanding requires integrating the current input with surrounding context. Prior research has found that increasing context size in artificial text-based language systems improves the prediction of human brain activity. Here, we investigate (i) how the type of context (unidirectional vs. bidirectional) influences brain alignment; (ii) how the information contained in speech and text model embeddings changes as a function of context size and context type; and (iii) which changes in model representations could explain brain alignment. We recorded intracranial EEG from participants listening to audiobooks, and extracted corresponding layerwise embeddings from a speech model (Wav2Vec2) and a language model (RoBERTa) under different context sizes and types. We find that context type, rather than size, has the strongest influence on the linear decodability of linguistic structure, the intrinsic dimensionality of the underlying representations, and ultimately brain alignment. This work is an important step towards understanding the representational basis of model-brain alignment, and identifies context type as a key driver of models extracting brain-relevant information.
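
The abstract does not give implementation details, but the embedding-extraction step it describes (layerwise representations of a word under a controlled amount and direction of context) can be illustrated with a minimal sketch using HuggingFace Transformers. The checkpoint name, the `layerwise_embeddings` function, and the word-window scheme below are illustrative assumptions, not the authors' pipeline; a speech-model analogue with Wav2Vec2 would follow the same pattern on audio segments.

```python
# Minimal sketch (assumed, not the authors' code): layerwise RoBERTa embeddings
# for a target word under unidirectional vs. bidirectional context windows.
import torch
from transformers import RobertaTokenizerFast, RobertaModel

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaModel.from_pretrained("roberta-base")
model.eval()

def layerwise_embeddings(words, target_idx, n_context, bidirectional=True):
    """Return one embedding per layer for the target word.

    words: list of word strings from the stimulus transcript.
    target_idx: index of the word whose representation we want.
    n_context: number of context words on each included side.
    bidirectional: if False, only preceding context is kept (unidirectional).
    """
    start = max(0, target_idx - n_context)
    end = target_idx + 1 + (n_context if bidirectional else 0)
    window = words[start:end]
    target_pos = target_idx - start  # position of the target within the window

    enc = tokenizer(window, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)

    # Average sub-token embeddings belonging to the target word, per layer.
    word_ids = enc.word_ids(0)
    token_mask = torch.tensor([wid == target_pos for wid in word_ids])
    return [layer[0][token_mask].mean(dim=0) for layer in out.hidden_states]

# Example: embeddings for "cat" with 5 words of bidirectional context.
words = "the quick brown fox watched the sleepy cat near the old barn".split()
embs = layerwise_embeddings(words, target_idx=7, n_context=5, bidirectional=True)
print(len(embs), embs[0].shape)  # 13 layers (embedding + 12 transformer), dim 768
```

Embeddings extracted this way per context condition could then be related to the intracranial recordings with a standard linear encoding analysis, which is one common way such brain-alignment scores are computed.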

Topic Area: Language & Communication

Extended Abstract: Full Text PDF