Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall
Non-Monotonic Plasticity in Large Language Models
Camila Kolling, Mariya Toneva; Max Planck Institute for Software Systems (MPI-SWS)
Presenter: Camila Kolling
Neural representations in biological memory systems change systematically during associative learning. The Non-Monotonic Plasticity Hypothesis (NMPH) proposes that these changes follow a surprising U-shaped pattern as a function of how strongly two items are initially related: moderately related items become more distinct after learning, rather than more similar. We provide the first evidence that large language models (LLMs) also exhibit this non-monotonic pattern of representational change, aligning with the NMPH observed in humans. Using an in-context associative learning paradigm that requires no changes to model weights, we show that moderately similar token pairs significantly differentiate, and that this differentiation occurs when accuracy is both highest and most stable across repeated item presentations. Our results suggest that LLMs can serve as models of human associative learning, offering a framework for studying representational change during learning.
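To make the paradigm concrete, the following is a minimal, hypothetical sketch of how representational change for a token pair could be probed without weight updates: compare the pair's hidden-state similarity before and after the items are repeatedly paired in the context. The model (gpt2), item pair, prompt format, and layer choice are placeholder assumptions, not the authors' setup.

```python
# Hypothetical sketch (not the authors' code): probe how the representational
# similarity of a token pair changes after in-context associative "study",
# with no updates to model weights. Model, items, prompt, and layer are
# placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def token_representation(context: str, token: str, layer: int = -1) -> torch.Tensor:
    """Hidden state at the position of `token` when it follows `context`."""
    ids = tok(context + token, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # Last position corresponds to (the final subword of) the probed token.
    return out.hidden_states[layer][0, -1]

def pair_similarity(context: str, a: str, b: str) -> float:
    """Cosine similarity between the two items' representations in a given context."""
    ra = token_representation(context, a)
    rb = token_representation(context, b)
    return torch.nn.functional.cosine_similarity(ra, rb, dim=0).item()

# One illustrative item pair; repeatedly pairing the items in the prompt
# stands in for the associative "study" phase.
a, b = " apple", " orange"
baseline = pair_similarity("", a, b)                  # initial relatedness
study = "apple orange. apple orange. apple orange. "  # repeated co-presentation
after = pair_similarity(study, a, b)

# Under the NMPH, pairs of moderate baseline similarity are expected to
# differentiate (after < baseline); repeating this over many pairs and binning
# by baseline similarity would test for the U-shaped pattern.
print(f"before: {baseline:.3f}  after: {after:.3f}  change: {after - baseline:+.3f}")
```

Running this over many pairs spanning low, moderate, and high baseline similarity, and plotting the change in similarity against the baseline, is one way the non-monotonic (U-shaped) profile described in the abstract could be visualized.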
Topic Area: Memory, Spatial Cognition & Skill Learning
Extended Abstract: Full Text PDF