Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall
Moderate evidence for large language models reflecting human neurocognition during abstract reasoning
Christopher Pinier¹, Claire E Stevenson¹, Michael Nunez¹; ¹University of Amsterdam
Presenter: Christopher Pinier
Large language models (LLMs) have shown alignment with human brain activity during language tasks, but it remains unclear whether this correspondence extends to higher-order cognition such as abstract reasoning. In this study, we compared human EEG responses, specifically fixation-related potentials (FRPs) time-locked to gaze-fixation onsets, to the internal activations of eight open-source LLMs performing a visual abstract reasoning task. Intermediate LLM layers showed clear differentiation across reasoning pattern types, suggesting potential specialization. While the best-performing models reached human-level accuracy, they did not consistently align with human behavioral patterns. Representational similarity analysis revealed only moderate correlations between model activations and FRP data. This may reflect limited neural alignment in LLMs, limited task-relevant cognitive signal in the FRPs, or both. These findings highlight both the promise and limitations of using LLMs as models of human abstract reasoning.
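For readers unfamiliar with representational similarity analysis (RSA), the comparison works roughly as sketched below: build a representational dissimilarity matrix (RDM) over reasoning items for each data source, then rank-correlate the two RDMs. This is a minimal illustration in Python, not the authors' pipeline; the array names, shapes, and dissimilarity metric are assumptions for the sake of the example.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features: np.ndarray) -> np.ndarray:
    # Condensed representational dissimilarity matrix (1 - Pearson r)
    # over items; `features` has shape (n_items, n_features).
    return pdist(features, metric="correlation")

# Hypothetical inputs: one feature vector per reasoning item.
llm_layer_acts = np.random.rand(40, 768)  # e.g., activations from one LLM layer
frp_features = np.random.rand(40, 128)    # e.g., FRP amplitudes per item

# RSA score: Spearman rank correlation between the two condensed RDMs.
rho, p = spearmanr(rdm(llm_layer_acts), rdm(frp_features))
print(f"RSA Spearman rho = {rho:.3f} (p = {p:.3g})")

In practice this is typically repeated per model layer and per electrode or time window, which is one way intermediate-layer effects like those reported above can be localized.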
Topic Area: Language & Communication
Extended Abstract: Full Text PDF