Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Dynamics and Structure of Generalization During Reinforcement Learning in Human Brains and Artificial Networks

Shany Grossman1, Noa Hedrich, Andrew M Saxe2, Nicolas W. Schuck3; 1Max-Planck Institute, 2University College London, University of London, 3Universität Hamburg

Presenter: Shany Grossman

Goal-directed decision making amidst an overwhelming stream of sensory input requires learning internal representations that capture a task's underlying structure. Importantly, such internal abstractions enable generalization: representing an object's shape but ignoring its color, for instance, means that anything learned about a green triangle will generalize to red triangles. Here, we investigate this dynamic interaction between task representation learning and generalization. Human participants and artificial neural networks were trained on the same contextual reinforcement learning task. Analyses of the human data reveal that participants learned an abstract task structure, which became detectable in the orbitofrontal cortex (OFC) after learning. Recurrent neural networks trained on the same learning curriculum exhibit a similar abstraction of task representations over time. Notably, we find that the similarity structure of a network's internal task representations affects how a weight update after a single example alters network behavior and representations on other trials. The network's progressive context differentiation in its internal layers hence leads to generalization of single experiences to other events within the same context. Ongoing work aims to gain a mechanistic understanding of these model observations and to contrast them with learning dynamics in the human brain.
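The following is a minimal illustrative sketch, not the authors' actual paradigm, of the kind of analysis the abstract describes: train a recurrent network on one example from a toy contextual task, then relate how strongly that single weight update changes behavior on every other trial to each trial's hidden-representation similarity to the trained trial. The GRU architecture, toy inputs, loss, and learning rate are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ContextRNN(nn.Module):
    """Toy recurrent network (assumed architecture, not the authors')."""
    def __init__(self, n_in=8, n_hid=32, n_out=2):
        super().__init__()
        self.rnn = nn.GRU(n_in, n_hid, batch_first=True)
        self.readout = nn.Linear(n_hid, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)                 # (batch, time, n_hid)
        return self.readout(h[:, -1]), h[:, -1]

model = ContextRNN()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy trials: the first input channel encodes context (-1 or +1).
trials = torch.randn(20, 5, 8)
trials[:10, :, 0], trials[10:, :, 0] = -1.0, 1.0
labels = torch.cat([torch.zeros(10), torch.ones(10)]).long()

with torch.no_grad():
    out_before, hidden = model(trials)     # pre-update outputs and hidden states

# Weight update from a single example (trial 0, context -1).
logits, _ = model(trials[:1])
loss = loss_fn(logits, labels[:1])
opt.zero_grad()
loss.backward()
opt.step()

with torch.no_grad():
    out_after, _ = model(trials)

# Behavioral change on every other trial induced by the single update,
# compared with each trial's hidden-state similarity to the trained trial.
delta = (out_after - out_before).norm(dim=1)
sim = torch.nn.functional.cosine_similarity(hidden, hidden[:1])
for i in range(1, len(trials)):
    print(f"trial {i:2d}  similarity {sim[i]:+.2f}  output change {delta[i]:.3f}")
```

Under this toy setup, trials whose hidden states resemble the trained trial's (here, same-context trials) should show larger output changes, mirroring the abstract's claim that context differentiation in internal layers channels single-experience learning to same-context events.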

Topic Area: Predictive Processing & Cognitive Control

Extended Abstract: Full Text PDF