Neural Computations: Dynamics Across Space, Time, and Task
Contributed Talk Session: Friday, August 15, 12:00 – 1:00 pm, Room C1.03
Task-Relevant Information is Distributed Across the Cortex, but the Past is State-Dependent and Restricted to Frontal Regions
Talk 1, 12:00 pm – Lubna Shaheen Abdul1, Scott L Brincat2, Earl K Miller2, Joao Barbosa3; 1Université Paris Diderot, 2Massachusetts Institute of Technology, 3Université Paris-Saclay
Presenter: Lubna Shaheen Abdul
Cognitive flexibility is the ability to adapt our decisions to changing demands (Braem & Egner, 2018). This behavioral feat requires matching internal states to ongoing task demands (Ashwood et al., 2022). Here, we examined behavior and single-unit activity recorded simultaneously from seven regions of the monkey brain (PFC, FEF, LIP, MT, V4, IT, and parietal cortex) while two monkeys performed a context-dependent decision-making task. The relevant context was cued by a specific shape on every trial, indicating whether color or motion was relevant for the decision. Using Hidden Markov Models, we identified three latent cognitive states that captured dynamic shifts in engagement and strategy throughout the task (Hulsey et al., 2024). Specifically, we found two context-dependent states, each marked by high accuracy in one context. We also found a context-independent state (a "default state") in which both stimulus features were integrated and lapse rates were higher. Previous-trial stimuli and responses biased current-trial responses in all states, but in opposite directions, and more strongly in the default state. Preliminary neural decoding analyses reveal that current stimuli are broadly represented across cortical areas, whereas past-trial features are encoded only in frontal regions (PFC and FEF). Together, these findings highlight the role of higher-order regions and internal cognitive states in shaping perceptual decisions.
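The state-identification step lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of fitting a three-state Hidden Markov Model to a sequence of discrete trial outcomes with hmmlearn; the outcome encoding and the synthetic data are assumptions made purely for illustration.

```python
# Minimal sketch (assumes hmmlearn >= 0.3): recover latent behavioral
# states from trial-by-trial outcomes, in the spirit of Ashwood et al.
# (2022) and Hulsey et al. (2024).
import numpy as np
from hmmlearn import hmm

# Hypothetical encoding: 0 = correct, 1 = error, 2 = lapse (no response).
rng = np.random.default_rng(0)
outcomes = rng.integers(0, 3, size=(500, 1))  # one session, 500 trials

model = hmm.CategoricalHMM(n_components=3, n_iter=200, random_state=0)
model.fit(outcomes)

# Most likely state on each trial (Viterbi path); one would then ask
# whether accuracy and history biases differ across these states.
states = model.predict(outcomes)
print(model.emissionprob_)  # per-state outcome probabilities
```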
Traveling Waves Integrate Spatial Information Through Time
Talk 2, 12:10 pm – Mozes Jacobs1, Roberto C. Budzinski2, Lyle Muller3, Demba E. Ba4, T. Anderson Keller4; 1Harvard University, 2University of Western Ontario, 3Western University, 4Harvard University
Presenter: Mozes Jacobs
Traveling waves of neural activity are widely observed in the brain, but their precise computational function remains unclear. One prominent hypothesis is that they enable the transfer and integration of spatial information across neural populations. However, few computational models have explored how traveling waves might be harnessed to perform such integrative processing. Drawing inspiration from the famous "Can one hear the shape of a drum?" problem, which highlights how the normal modes of wave dynamics encode geometric information, we investigate whether similar principles can be leveraged in artificial neural networks. Specifically, we introduce convolutional recurrent neural networks that learn to produce traveling waves in their hidden states in response to visual stimuli, enabling spatial integration. By then treating these wave-like activation sequences as visual representations themselves, we obtain a powerful representational space that outperforms local feed-forward networks on tasks requiring global spatial context. In particular, we observe that traveling waves effectively expand the receptive fields of locally connected neurons, supporting long-range encoding and communication of information. We demonstrate that models equipped with this mechanism solve visual semantic segmentation tasks demanding global integration, significantly outperforming local feed-forward models and rivaling non-local U-Net models while using fewer parameters. As a first step toward traveling-wave-based communication and visual representation in artificial networks, our findings suggest that wave dynamics may provide efficiency and training-stability benefits while offering a new framework for connecting models to biological recordings of neural activity.
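To make the architectural idea concrete, here is a minimal PyTorch sketch of a convolutional recurrent cell with purely local recurrence; the layer sizes and unroll length are illustrative assumptions, not the authors' configuration. Because each update is a small local convolution, stimulus-driven activity can only spread laterally step by step, which is the sense in which the hidden-state sequence behaves like a traveling wave and gradually expands each unit's effective receptive field.

```python
# Minimal sketch (assumptions, not the authors' architecture): a
# convolutional RNN whose hidden state is updated by a local 3x3
# convolution, so information propagates across the hidden grid
# one kernel radius per timestep.
import torch
import torch.nn as nn

class ConvRNNCell(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        self.inp = nn.Conv2d(3, channels, 3, padding=1)         # stimulus drive
        self.rec = nn.Conv2d(channels, channels, 3, padding=1)  # local recurrence

    def forward(self, x, h):
        # Local recurrent update: the effective receptive field of each
        # hidden unit grows with every unrolled step.
        return torch.tanh(self.inp(x) + self.rec(h))

cell = ConvRNNCell()
x = torch.randn(1, 3, 32, 32)         # a static visual stimulus
h = torch.zeros(1, 16, 32, 32)
waves = []
for _ in range(10):                   # unroll: hidden states form a
    h = cell(x, h)                    # wave-like activation sequence
    waves.append(h)
seq = torch.stack(waves, dim=1)       # (batch, time, C, H, W) representation
```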
Representational Geometry Dynamics in Networks After Long-Range Modulatory Feedback
Talk 3, 12:20 pm – Kexin Cindy Luo1, George A. Alvarez1, Talia Konkle1; 1Harvard University
Presenter: Kexin Cindy Luo
The human visual system employs extensive long-range feedback circuitry, in which feedforward and feedback connections iteratively refine interpretations through reentrant loops (Di Lollo, 2012). Inspired by this neuroanatomy, a recent computational model incorporated long-range modulatory feedback into a convolutional neural network (Konkle & Alvarez, 2023). While that prior work focused on injecting an external goal signal to leverage feedback for category-based attention, here we investigated the model's default operation: how learned feedback intrinsically reshapes representational geometry without top-down goals. Analyzing this model's activations across two passes (feedforward versus feedback-modulated) on ImageNet data, we examined local (within-category) and global (between-category) structure. Our results demonstrate that feedback significantly compacts category clusters: exemplars move closer to prototypes, and local structure improves as more near neighbors fall within the same category. Notably, this occurs while largely preserving global structure, as between-category distances remain relatively stable. An exploratory analysis linking local and global changes suggested a positive relationship between local compaction and prototype shifts. These findings reveal an emergent "prototype effect" in which fixed long-range feedback automatically refines local representations, potentially enhancing the efficiency of categorical processing without disrupting overall representational organization. This suggests that intrinsic feedback dynamics may contribute fundamentally to perceptual organization.
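The local/global geometry analysis can be summarized in a short sketch. The following code (a reconstruction under stated assumptions, not the authors' analysis pipeline) computes a within-category measure (mean exemplar-to-prototype distance) and a between-category measure (mean prototype-to-prototype distance); the synthetic activations stand in for the feedforward and modulated passes.

```python
# Minimal sketch: local vs. global representational geometry for an
# activation matrix. All data below are synthetic placeholders.
import numpy as np

def local_global_geometry(acts, labels):
    """acts: (n_images, n_features); labels: (n_images,) category ids."""
    cats = np.unique(labels)
    protos = np.stack([acts[labels == c].mean(axis=0) for c in cats])
    # Local structure: mean distance from each exemplar to its prototype.
    local = np.mean([
        np.linalg.norm(acts[labels == c] - protos[i], axis=1).mean()
        for i, c in enumerate(cats)
    ])
    # Global structure: mean pairwise distance between prototypes.
    pair = np.linalg.norm(protos[:, None] - protos[None], axis=-1)
    glob = pair[np.triu_indices(len(cats), k=1)].mean()
    return local, glob

# Hypothetical stand-ins for the two passes: same prototypes, tighter
# clusters after "feedback". A prototype effect shows up as a smaller
# local value with a roughly unchanged global value.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=200)
means = rng.normal(size=(10, 64))
acts_ff = means[labels] + rng.normal(scale=1.0, size=(200, 64))
acts_fb = means[labels] + rng.normal(scale=0.5, size=(200, 64))
print(local_global_geometry(acts_ff, labels))
print(local_global_geometry(acts_fb, labels))
```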
What does spatial tuning tell us about the neural computation in the hippocampus?
Talk 4, 12:30 pm – Maxime Daigle1, Kaicheng Yan1, Benjamin Corrigan, Julio Martinez-Trujillo, Pouya Bashivan1; 1McGill University
Presenter: Maxime Daigle
The hippocampus has long been regarded as a neural map of physical space, with its neurons categorized as spatially or non-spatially tuned according to their response selectivity. However, growing evidence suggests that this dichotomy oversimplifies the complex roles hippocampal neurons play in integrating spatial and non-spatial information. Through computational modeling and in vivo electrophysiology in macaques, we show that neurons classified as spatially tuned primarily encode linear combinations of spatial and non-spatial features, while those labeled as non-spatially tuned rely on nonlinear mechanisms. Moreover, we demonstrate that nonlinear recurrent connections are crucial for capturing the response dynamics of non-spatially tuned neurons. These findings challenge the traditional dichotomy of spatial versus non-spatial representations and instead suggest a continuum of linear and nonlinear computations.
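The linear-versus-nonlinear distinction can be illustrated with encoding models. Below is a hypothetical scikit-learn sketch, not the authors' pipeline, that fits a ridge regression (linear readout of spatial and non-spatial features) and a small MLP (nonlinear readout) to a synthetic firing rate; a neuron whose nonlinear fit clearly exceeds its linear fit would sit at the nonlinear end of the proposed continuum.

```python
# Minimal sketch with synthetic data: does a neuron's rate need a
# nonlinear readout of spatial + non-spatial features?
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
pos = rng.uniform(size=(1000, 2))    # spatial features (e.g., x, y position)
task = rng.uniform(size=(1000, 3))   # non-spatial features (hypothetical)
X = np.hstack([pos, task])

# Synthetic rate: a linear part plus a nonlinear space-task interaction.
rate = X @ rng.normal(size=5) + 0.5 * np.sin(6 * pos[:, 0]) * task[:, 0]

lin = cross_val_score(Ridge(alpha=1.0), X, rate, cv=5).mean()
nonlin = cross_val_score(
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
    X, rate, cv=5,
).mean()
# lin ~ nonlin suggests a linear code ("spatially tuned" in the paper's
# sense); nonlin >> lin suggests nonlinear mechanisms.
print(f"linear R^2: {lin:.2f}, nonlinear R^2: {nonlin:.2f}")
```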
Examining the potential functional significance of initially poor temporal acuity
Talk 5, 12:40 pm – Marin Vogelsang1, Lukas Vogelsang1, Pawan Sinha1; 1Massachusetts Institute of Technology
Presenter: Marin Vogelsang
The human visual system is remarkably immature at birth, exhibiting initially degraded spatial and temporal vision. While early spatial degradations have been proposed to confer important benefits on the developing visual system, less is known about the potential adaptive significance of early temporal immaturities. Here, we investigated this possibility computationally, using 3D convolutional neural networks trained on a temporally meaningful classification task. We systematically manipulated spatial and temporal blur when training on the Something-Something V2 dataset, which critically depends on temporal order. Analysis of the learned receptive fields revealed that initial exposure to temporal blur led to longer-range temporal processing that persisted even after the transition to clear temporal inputs. Such a developmental trajectory, commencing with initial temporal blur, also significantly enhanced generalization performance compared to training with high-temporal-resolution input or with corresponding spatial blur alone. These findings extend the concept of adaptive developmental degradations into the temporal domain, suggesting that immaturities in temporal vision may instantiate important mechanisms for robust perception later in life.
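The temporal-blur manipulation itself is simple to express. Here is a minimal sketch, under our own assumptions about the data layout, of Gaussian smoothing along the frame axis of a video tensor in PyTorch; a developmental curriculum would begin training with a large sigma and anneal it toward zero (clear input).

```python
# Minimal sketch (a reconstruction, not the authors' code): temporal
# blur as Gaussian smoothing over the time axis of a video clip.
import torch
import torch.nn.functional as F

def temporal_blur(clip: torch.Tensor, sigma: float) -> torch.Tensor:
    """clip: (batch, channels, time, height, width) video tensor."""
    half = int(3 * sigma)
    t = torch.arange(-half, half + 1, dtype=clip.dtype)
    kernel = torch.exp(-t**2 / (2 * sigma**2))
    kernel = (kernel / kernel.sum()).view(1, 1, -1, 1, 1)
    b, c, T, h, w = clip.shape
    # Blur each channel independently along time only.
    x = clip.reshape(b * c, 1, T, h, w)
    x = F.conv3d(x, kernel, padding=(half, 0, 0))
    return x.reshape(b, c, T, h, w)

# Curriculum idea: start at e.g. sigma=2.0 and reduce toward 0 over
# training, mirroring the developmental trajectory described above.
clip = torch.randn(2, 3, 16, 32, 32)
blurred = temporal_blur(clip, sigma=2.0)
```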