Neural mechanisms of learning and memory: from synapses to systems
Contributed Talk Session: Friday, August 15, 11:00 am – 12:00 pm, Room C1.03
Collaborative Encoding of Visual Working Memory
Talk 1, 11:00 am – Huang Ham1, Evan Russek1, Thomas L. Griffiths1, Natalia Vélez1; 1Princeton University
Presenter: Huang Ham
Collaboration helps humans surmount individual cognitive limitations by distributing information over many minds. However, figuring out when and how to collaborate is not trivial. This study examines whether dyads split up information in a collaborative visual working memory task when doing so improves performance. Participants (N=356) memorized grids of 4, 16, or 36 images both alone and with a partner. We used a visual working memory model to estimate how much dyads would benefit from splitting up a grid of images, rather than each memorizing the grid independently. Our model predicts that participants should split up grids that are neither too easy nor too difficult to benefit from collaboration. Indeed, participants tacitly adopted conventions to split up medium and large grids, and were more accurate in these conditions when they worked together than when they acted alone, but not small grids, where individual performance was already at ceiling. Our work provides a first step toward understanding how decisions about when and how to collaborate are shaped by the adaptive use of cognitive resources.
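As a rough illustration of why splitting should help only at intermediate grid sizes, the sketch below uses a simple item-limit ("slot") account; the capacity parameter and the slot formulation are our own simplifying assumptions, not the authors' model.

```python
# Minimal sketch (not the authors' model): an item-limit account of when
# splitting a grid should help a dyad, for the grid sizes used in the study.

def recall_prob(n_items, capacity=4.0):
    """P(correct) for one person memorizing n_items under a slot model."""
    return min(1.0, capacity / n_items)

def split_accuracy(grid_size, capacity=4.0):
    """Dyad splits the grid: each partner memorizes half the items."""
    return recall_prob(grid_size / 2, capacity)

def redundant_accuracy(grid_size, capacity=4.0):
    """Both partners memorize the full grid; an item is recalled if either succeeds."""
    p = recall_prob(grid_size, capacity)
    return 1.0 - (1.0 - p) ** 2

for n in (4, 16, 36):
    print(f"grid={n:2d}  split={split_accuracy(n):.2f}  "
          f"redundant={redundant_accuracy(n):.2f}")
# Small grids are at ceiling either way; medium and large grids favor splitting.
```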
Computational Model for Episodic Timeline Based on a Spectrum of Synaptic Decay Rates
Talk 2, 11:10 am – James Mochizuki-Freeman1, Sara Zomorodi1, Sahaj Singh Maini2, Zoran Tiganj2; 1Indiana University, 2Indiana University, Bloomington
Presenter: Zoran Tiganj
Human episodic memory enables the retrieval of temporally organized past experiences. Retrieval cues can target both semantic content and temporal location, reflecting the multifaceted nature of episodic recall. Existing computational models provide mechanistic accounts for how temporally organized memories of the recent past (seconds to minutes) can persist in neural activity. However, it remains unclear how episodic memories can be stored and retrieved while preserving temporal structure within and across episodes. Here, we propose a computational model that uses a spectrum of synaptic decay rates to store temporally organized memories of the recent past as an episodic timeline. We characterize how these memories can be retrieved using a memory of the recent past, specific semantic cues, or temporal addressing. This approach thus bridges short-term working memory and longer-term episodic storage, offering a computational model of how synaptic dynamics can maintain temporally structured events.
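The sketch below illustrates the general idea of a spectrum of decay rates producing a temporally structured code of the recent past; the number of rates, their range, and the input sequence are arbitrary choices for illustration, not the authors' parameters.

```python
# Illustrative sketch only: exponential traces with a spectrum of decay rates
# jointly encode *what* happened and roughly *when* it happened.
import numpy as np

n_rates = 20
decay_rates = np.logspace(-2, 0, n_rates)    # spectrum of synaptic decay rates
trace = np.zeros(n_rates)

sequence = [0, 0, 1, 0, 0, 0, 1, 0]          # binary input with events at t=2 and t=6
snapshots = []
for x in sequence:
    trace = trace * np.exp(-decay_rates) + x  # each synapse decays at its own rate
    snapshots.append(trace.copy())

# Fast-decaying traces emphasize the most recent event, slow ones still retain
# the older event, so the population encodes an ordered history.
print(np.round(snapshots[-1], 3))
```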
Good and consequential counterfactual outcomes are prioritized during learning
Talk 3, 11:20 am – Kate Nussenbaum1, Nathaniel D. Daw2; 1Boston University, 2Princeton University
Presenter: Kate Nussenbaum
People can learn from actions not taken by leveraging mental models to imagine their potential consequences. However, for any given choice, the number of possible alternative actions often exceeds the brain's capacity for simulation. Here, we develop a new task to measure behaviorally whether people selectively prioritize the counterfactual updates that are most likely to improve their future decisions. Our initial results (N = 69) indicate that people most strongly consider high-magnitude alternatives as well as those that are better than the option they selected, suggesting that people do indeed consider alternative possibilities strategically.
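One way to picture the prioritization described above is a learner with a fixed simulation budget that updates only the most useful unchosen options; the priority rule, learning rate, and budget below are hypothetical stand-ins, not the authors' task or analysis code.

```python
# A hedged sketch of prioritized counterfactual updating: with a limited
# budget, update the unchosen options whose counterfactual outcomes are large
# in magnitude and better than the obtained outcome.
def prioritized_counterfactual_update(q, chosen, outcomes, alpha=0.3, budget=2):
    """q: dict option -> value; outcomes: dict option -> (hypothetical) reward."""
    obtained = outcomes[chosen]
    q[chosen] += alpha * (obtained - q[chosen])           # factual update

    alternatives = [o for o in outcomes if o != chosen]
    # Priority: high-magnitude outcomes, plus a bonus for beating the chosen option.
    priority = lambda o: abs(outcomes[o]) + (outcomes[o] > obtained)
    for o in sorted(alternatives, key=priority, reverse=True)[:budget]:
        q[o] += alpha * (outcomes[o] - q[o])              # counterfactual update
    return q

q = {"A": 0.0, "B": 0.0, "C": 0.0, "D": 0.0}
outcomes = {"A": 1.0, "B": 5.0, "C": -4.0, "D": 0.5}
print(prioritized_counterfactual_update(q, chosen="A", outcomes=outcomes))
```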
Experience supports performance by abstraction learning in recurrent networks
Talk 4, 11:30 am – John C Bowler1, Dua Azhar, Cambria Jensen, Hyunwoo Lee, James G Heys; 1University of Utah
Presenter: John C Bowler
Our prior experience affects the strategies we adopt during future problem solving; however, in complex problem spaces it can be difficult to isolate the key features of past experience that are critical to future progress. Therefore, we asked: how does past experience alter cognition in ways that facilitate (or hinder) future task performance? We trained Recurrent Neural Networks (RNNs) to model a complex odor timing task, using constraints derived from prior reports detailing mouse behavior and shaping procedures. RNNs subject to well-designed pre-training develop lower-dimensional network activity and learn a key abstraction about the temporal structure of the task, resulting in improved future performance after training on the full task. The compositional nature of learning suggests that assembling fundamental building blocks from past experiences is essential for future problem solving; however, we demonstrate that training on arbitrary sub-components of the full task is insufficient to aid learning. We replicate these findings in both the behavior and neural dynamics of mice performing the task. Additionally, analysis of the dynamical mechanisms that RNNs learn after shaping predicted unanticipated responses to novel trial types that may translate to animal behavior, which we confirm experimentally.
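To make the shaping-then-full-task setup concrete, here is a minimal sketch of pre-training an RNN before continuing on the full task; the architecture, dimensions, and the placeholder batch generators are our assumptions, not the experimental protocol or the authors' training code.

```python
# Sketch under simplified assumptions: pre-train an RNN on a reduced version
# of a timing task (a "shaping" stage), then continue training on the full task.
import torch
import torch.nn as nn

class TimingRNN(nn.Module):
    def __init__(self, n_in=3, n_hidden=64, n_out=2):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)          # hidden states for every time step
        return self.readout(h)      # time-resolved output

def train(model, make_batch, steps, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        inputs, targets = make_batch()
        opt.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        opt.step()

def pretrain_batch():   # simplified "shaping" trials (placeholder generator)
    return torch.randn(32, 50, 3), torch.zeros(32, 50, 2)

def full_task_batch():  # full timing trials (placeholder generator)
    return torch.randn(32, 100, 3), torch.zeros(32, 100, 2)

model = TimingRNN()
train(model, pretrain_batch, steps=200)   # shaping stage
train(model, full_task_batch, steps=200)  # full task
```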
Reward-Prediction-Error-Guided Attention Explains Behavioral Learning Curves
Talk 5, 11:40 am – Mingze Li Leukos1, Grace W Lindsay1; 1New York University
Presenter: Mingze Li Leukos
Attention and learning are highly intertwined: past experiences determine where attention is focused at present, and the focus of attention guides future experiences. In the context of reinforcement learning (RL), previous work has demonstrated how reward feedback can be used to learn a value-function-based attention template (Jahn et al., 2024). Many open questions remain, however, regarding the exact way in which internal value estimates guide attentional modulation in the visual system. We explore these questions by building a perceptual model where top-down feature-selective attention is determined by an internal value function. We then examine several forms that the relationship between value and attention can take in this model. We find that, to fit the unique features of the behaviorally observed learning curve, attention should be focused on the color with the highest estimated value and its strength should be inverted after large negative prediction errors. This work gives us a compact description of a latent process relating two important cognitive variables and sets the groundwork for exploring how the relationship between reward feedback and attention may vary under different tasks.
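The sketch below caricatures the value-to-attention rule described above (attend to the highest-valued color; flip the attentional gain after a large negative prediction error); the gain, learning rate, threshold, and choice rule are illustrative guesses, not the fitted model of Jahn et al. (2024) or the authors'.

```python
# Illustrative sketch only: reward-prediction-error-guided attention.
import numpy as np

def attention_weights(values, gain, n_colors):
    """Feature-selective gain: boost (or suppress, if gain < 0) the best color."""
    w = np.ones(n_colors)
    w[int(np.argmax(values))] += gain
    return w

values = np.zeros(3)           # running value estimate per color
gain, alpha = 1.0, 0.2
rng = np.random.default_rng(0)

for trial in range(100):
    w = attention_weights(values, gain, n_colors=3)
    chosen = int(np.argmax(w))                              # attend to the boosted color
    reward = rng.normal(1.0 if chosen == 1 else -1.0, 0.5)  # color 1 is rewarded
    rpe = reward - values[chosen]
    values[chosen] += alpha * rpe
    gain = -abs(gain) if rpe < -1.0 else abs(gain)          # invert after a large negative RPE

print(np.round(values, 2), gain)
```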