
Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall

Finding Modularity in Large Language Models: Insights from Aphasia Simulations

Chengcheng Wang¹, Jixing Li¹; ¹City University of Hong Kong

Presenter: Jixing Li

Recent large language models (LLMs) excel at complex linguistic tasks and share computational principles with human language processing. However, it remains unclear whether their internal components specialize in distinct functions, such as semantic and syntactic processing, as seen in humans. To explore this, we selectively disrupted LLM components to replicate the behavioral patterns of aphasia, a disorder characterized by specific language deficits resulting from brain injury. Our experiments revealed that simulating semantic deficits akin to Wernicke’s aphasia was relatively straightforward, whereas reproducing syntactic deficits characteristic of Broca’s aphasia proved more challenging. These results highlight both parallels and divergences between the emergent modularity of LLMs and the human language system, offering new insights into information representation and processing in artificial and biological intelligence.
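
To illustrate the general idea of "selectively disrupting LLM components", the sketch below zeroes the attention outputs of a few GPT-2 blocks via PyTorch forward hooks. This is a minimal, hypothetical example and not the authors' method: the model (gpt2), the lesioned block indices, and the zeroing strategy are all illustrative assumptions.

```python
# Hedged sketch: lesioning attention sub-layers of a transformer LLM.
# Model choice, block indices, and ablation strategy are assumptions
# for illustration only, not the authors' actual setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical choice: "lesion" the attention sub-layers of blocks 4-7.
LESIONED_BLOCKS = [4, 5, 6, 7]

def ablate_attention(module, inputs, output):
    # GPT-2's attention module returns a tuple whose first element is the
    # attention output tensor; zero it so the block contributes only its
    # MLP path, mimicking a localized lesion.
    return (torch.zeros_like(output[0]),) + output[1:]

hooks = [
    model.transformer.h[i].attn.register_forward_hook(ablate_attention)
    for i in LESIONED_BLOCKS
]

prompt = "The boy that the girl chased was tall because"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    generated = model.generate(input_ids, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(generated[0]))

# Remove the hooks to restore the intact model.
for h in hooks:
    h.remove()
```

Comparing the lesioned model's output against the intact model's on the same prompts is one way such behavioral deficits could be probed; the specific prompts and evaluation used in the study are described in the extended abstract.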

Topic Area: Language & Communication

Extended Abstract: Full Text PDF