Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall
Stage-like Emergence of Task Strategies in Animals and in Neural Networks Trained by Gradient Descent
J Tyler Boyd-Meredith1, Cristofer Holobetz1, Andrew M Saxe1; 1University College London, University of London
Presenter: J Tyler Boyd-Meredith
Humans and animals learning a task often appear to adopt a series of distinct strategies before reaching expert performance. This progression could result from deliberately testing distinct hypotheses about task contingencies. However, stage-like strategy changes can also be produced by artificial neural networks (ANNs) learning by gradient descent (GD) without any explicit notion of task strategy. In this setting, apparent strategies correspond to saddle points of the loss landscape, near which learning slows before accelerating toward the next fixed point. We trained mice to perform a previously developed discrimination task, which they acquired in a series of stage-like behavioral transitions. We then developed an ANN model that recapitulated these transitions. By measuring the magnitude of the gradients during learning, we determined when the network approached saddle points (decreasing gradient norm) and escaped them (increasing gradient norm) before reaching expert performance. Our modeling results show that even simple connectionist models without explicit hypotheses can be tailored to produce stages of learning that match what we observe in animals. We propose to develop and apply a method to identify saddle points of the loss, and the likely transitions between them, by performing gradient descent not on the loss function but on the magnitude of its gradient. In this abstract, we show how this tool identifies saddle points and their connections in a toy example.
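The saddle-finding idea described above can be illustrated with a minimal sketch (our own construction, not the authors' code): for a toy loss L(x, y) = x² − y², which has a saddle at the origin, gradient descent on the squared gradient magnitude ‖∇L‖² converges to that saddle point, since ‖∇L‖² is minimized (equal to zero) at every critical point of L, including saddles that descent on L itself would escape.

```python
import numpy as np

# Toy loss with a saddle at the origin: L(x, y) = x^2 - y^2
def loss_grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])  # analytic gradient of L

def grad_norm_sq(p):
    g = loss_grad(p)
    return g @ g  # ||grad L||^2, zero exactly at critical points of L

def grad_of_norm_sq(p):
    # For this L, the Hessian is H = diag(2, -2), and the gradient of
    # ||grad L||^2 is 2 * H @ grad L, written here in closed form.
    x, y = p
    return np.array([8.0 * x, 8.0 * y])

# Descend on the gradient magnitude, not on the loss itself.
p = np.array([1.5, -0.7])
for _ in range(200):
    p = p - 0.05 * grad_of_norm_sq(p)

print(p)  # converges to the saddle point (0, 0), where ||grad L|| = 0
```

In realistic networks the gradient of ‖∇L‖² would be obtained by automatic differentiation rather than in closed form, and different initializations would converge to different saddle points, revealing the connections between them.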
Topic Area: Memory, Spatial Cognition & Skill Learning
Extended Abstract: Full Text PDF