
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

On whether the relationship between large language models and brain activity is language-specific

Sertug Gürel1, Alessandro Lopopolo1, Milena Rabovsky1; 1Universität Potsdam

Presenter: Sertug Gürel

Using large language models (LLMs), such as GPT-2, to study language processing in both machines and humans has become increasingly prevalent. Existing literature demonstrates that these models are strong predictors of human brain activity (Schrimpf et al., 2021), which has been taken to indicate that LLMs are good models of language processing in the human brain. The current study assessed whether the models' ability to predict brain activity is specific to brain regions involved in language processing, and whether predictions for functionally different brain regions rely on different features of the LLMs' hidden layers. Our results suggest that LLMs' ability to predict brain activation does not strongly differ between language-related and non-language-related brain areas. The sets of features that drive prediction performance are not identical across areas, but the features that language-related and non-language-related regions rely on are considerably correlated. Hence, we suggest that more research is needed to understand the nature of the information that drives brain predictions in LLMs.
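For readers unfamiliar with this kind of analysis, the sketch below illustrates the general encoding-model logic the abstract refers to: ridge regression maps LLM hidden-layer features to voxel responses in different brain regions, and the learned feature weights can then be compared across regions. It uses synthetic arrays in place of real GPT-2 activations and fMRI data; the ROI names, array shapes, and regression/correlation choices are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal encoding-model sketch with synthetic data (NOT the authors' pipeline).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_stimuli, n_features = 400, 768                    # e.g. stimuli x GPT-2 hidden units
X = rng.standard_normal((n_stimuli, n_features))    # placeholder LLM features

# Placeholder voxel responses for a language-related and a non-language ROI.
rois = {
    "language_roi": rng.standard_normal((n_stimuli, 50)),
    "non_language_roi": rng.standard_normal((n_stimuli, 50)),
}

def fit_encoding_model(X, Y):
    """Fit ridge regression from LLM features to voxel responses; return the
    mean held-out prediction accuracy and the learned feature weights."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)
    # Per-voxel correlation between predicted and observed activity, averaged.
    r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1]
                  for v in range(Y.shape[1])])
    return r.mean(), model.coef_                     # coef_: (n_voxels, n_features)

scores, weights = {}, {}
for name, Y in rois.items():
    scores[name], weights[name] = fit_encoding_model(X, Y)
    print(f"{name}: mean held-out prediction r = {scores[name]:.3f}")

# Compare which features drive predictions in the two ROIs by correlating
# their ROI-averaged feature-weight profiles.
w_lang = weights["language_roi"].mean(axis=0)
w_nonlang = weights["non_language_roi"].mean(axis=0)
print("feature-weight correlation across ROIs:",
      np.corrcoef(w_lang, w_nonlang)[0, 1])
```

With real data, similar prediction accuracy in both ROIs together with a high weight correlation would correspond to the pattern the abstract describes: prediction performance that is not strongly region-specific, driven by largely overlapping feature sets.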

Topic Area: Language & Communication

Extended Abstract: Full Text PDF