Contributed Talk Session: Friday, August 15, 11:00 am – 12:00 pm, Room C1.03
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Experience supports performance by abstraction learning in recurrent networks

John C Bowler¹, Dua Azhar, Cambria Jensen, Hyunwoo Lee, James G Heys; ¹University of Utah

Presenter: John C Bowler

Our prior experience affects the strategies we adopt during future problem solving; however, in complex problem spaces it can be difficult to isolate the key features of past experience that are critical to future progress. We therefore asked: how does past experience alter cognition in ways that facilitate (or hinder) future task performance? We trained Recurrent Neural Networks (RNNs) to model a complex odor timing task, using constraints derived from prior reports detailing mouse behavior and shaping procedures. RNNs subjected to well-designed pre-training develop lower-dimensional network activity and learn a key abstraction about the temporal structure of the task, resulting in improved performance after subsequent training on the full task. The compositional nature of learning suggests that assembling fundamental building blocks from past experiences is essential for future problem solving; however, we demonstrate that training on arbitrary sub-components of the full task is insufficient to aid learning. We replicate these findings in both the behavior and neural dynamics of mice performing the task. Additionally, analysis of the dynamical mechanisms that RNNs learn after shaping predicted unanticipated responses to novel trial types that may translate to animal behavior, which we confirm experimentally.
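A minimal sketch of the kind of two-phase training curriculum the abstract describes: an RNN is first "shaped" on a restricted version of a timing task, then trained on the full task, and the linear dimensionality of its activity is measured. Everything here is a hypothetical stand-in for illustration, assuming PyTorch; the toy task (make_batch), network sizes, and the participation-ratio measure are not the authors' code or task design.

import torch
import torch.nn as nn

class TimingRNN(nn.Module):
    """Vanilla RNN with a linear readout (illustrative architecture only)."""
    def __init__(self, n_in=3, n_hidden=128, n_out=1):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)            # h: (batch, time, n_hidden)
        return self.readout(h), h

def make_batch(batch=64, T=100, delay_range=(20, 60)):
    """Toy interval-timing trials: an input pulse, then a target step
    after a randomly drawn delay (hypothetical task, not the paper's)."""
    x = torch.zeros(batch, T, 3)
    y = torch.zeros(batch, T, 1)
    for i in range(batch):
        d = int(torch.randint(*delay_range, (1,)))
        x[i, 5, 0] = 1.0              # "odor" pulse at t = 5
        y[i, 5 + d:, 0] = 1.0         # respond after the delay elapses
    return x, y

def train_phase(model, steps, delay_range):
    """One curriculum phase; only the delay distribution changes across phases."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        x, y = make_batch(delay_range=delay_range)
        out, _ = model(x)
        loss = loss_fn(out, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

def participation_ratio(h):
    """PR = (sum of eigenvalues)^2 / sum of squared eigenvalues of the
    activity covariance; a common linear-dimensionality proxy (an assumed
    measure here, not taken from the abstract)."""
    hflat = h.reshape(-1, h.shape[-1])
    hflat = hflat - hflat.mean(0)
    eig = torch.linalg.eigvalsh(hflat.T @ hflat / hflat.shape[0])
    return (eig.sum() ** 2 / (eig ** 2).sum()).item()

model = TimingRNN()
train_phase(model, steps=500, delay_range=(30, 31))   # shaping: one fixed delay
train_phase(model, steps=500, delay_range=(20, 60))   # full task: variable delays
with torch.no_grad():
    x, _ = make_batch()
    _, h = model(x)
print("activity dimensionality (participation ratio):", participation_ratio(h))

The only difference between the two train_phase calls is the delay distribution, mirroring the idea that pre-training on a restricted sub-task precedes the full task; comparing the participation ratio with and without the shaping phase would be one simple way to probe the lower-dimensional activity the abstract reports.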

Topic Area: Memory, Spatial Cognition & Skill Learning

Extended Abstract: Full Text PDF