Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall
The Representational Alignment between Humans and Language Models is implicitly driven by a concreteness effect
Cosimo Iaia1, Bhavin Choksi1, Emily Wiebers1, Gemma Roig1, Christian J. Fiebach1; 1Johann Wolfgang Goethe Universität Frankfurt am Main
Presenter: Cosimo Iaia
The nouns of our language refer either to concrete entities (like a table) or to abstract concepts (like justice or love). Cognitive psychology has established that concreteness influences how words are processed. Accordingly, understanding how concreteness is represented in our mind and brain is a central question in psychology, neuroscience, and computational linguistics. While the advent of powerful language models has enabled quantitative inquiries into the nature of semantic representations, how these models represent concreteness remains largely underexplored. Here, we used behavioral judgments to estimate the semantic distances implicitly used by humans for a set of carefully selected abstract and concrete nouns. Using Representational Similarity Analysis, we find that the representational similarity space of participants and the semantic representations of language models are significantly aligned, and that both are implicitly aligned to an explicit representation of concreteness, which was obtained from our participants in an additional concreteness rating task. Importantly, using ablation experiments, we demonstrate that the human-to-model alignment is substantially driven by concreteness, not by other important word characteristics established in psycholinguistics, such as word frequency.
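The core comparison described above, aligning two representational spaces via Representational Similarity Analysis, can be illustrated with a minimal sketch. This is not the authors' code: the embeddings are random placeholders, and the choice of cosine distance and Spearman correlation is a common RSA convention assumed here for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    # Condensed representational dissimilarity matrix: pairwise cosine
    # distances between item embeddings (upper triangle as a flat vector).
    return pdist(features, metric="cosine")

def rsa_alignment(features_a, features_b):
    # RSA score: rank correlation between the two RDMs.
    rho, p = spearmanr(rdm(features_a), rdm(features_b))
    return rho, p

# Hypothetical data: 20 nouns embedded in 10 dimensions.
rng = np.random.default_rng(0)
human_space = rng.normal(size=(20, 10))                     # stand-in for behavioral distances
model_space = human_space + 0.1 * rng.normal(size=(20, 10))  # noisy copy -> high alignment

rho, p = rsa_alignment(human_space, model_space)
```

Because RSA compares dissimilarity structure rather than raw coordinates, the two spaces may have different dimensionality or units; only the number of items must match.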
Topic Area: Language & Communication
Extended Abstract: Full Text PDF