Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall
High-level information integration in the brain via large-scale attractor dynamics
Tamas Spisak1; 1Universität Duisburg-Essen
Presenter: Tamas Spisak
Understanding how high-level information integration arises from large-scale brain activity requires bridging computational principles with neural dynamics. We propose a theoretical framework in which large-scale brain dynamics emerge as trajectories around attractors in coarse-grained recurrent networks whose dynamics map precisely onto computations. The core network model in our framework combines principles of self-organization with attractor network theory and Bayesian inference, offering a recursive, multi-level description applicable to large-scale empirical data. A key feature of these networks is the emergent orthogonality of the attractors, which maximizes storage capacity and computational efficiency. Crucially, this orthogonality allows complex attractor dynamics to be mapped onto simpler, interpretable bipartite architectures, revealing how a wide variety of computations can be implemented implicitly by network-wide stochastic attractor dynamics. We propose this framework as a model for large-scale brain dynamics. Our approach aligns with previous literature and is supported by emerging evidence, such as observations of orthogonal brain attractors akin to canonical resting-state networks. The framework yields testable predictions and offers a principled yet simple approach to understanding, explaining, and predicting large-scale brain dynamics and the corresponding behavior.
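The role of attractor orthogonality can be illustrated with a minimal sketch, assuming a standard Hopfield-style network as a stand-in for the framework's attractor-network component (the network size, pattern count, and update rule below are illustrative assumptions, not the authors' model): mutually orthogonal patterns stored via Hebbian outer products become exact, interference-free fixed points of the dynamics, which is what underlies the capacity claim.

```python
import numpy as np

# Illustrative sketch only: a 16-unit Hopfield-style network storing
# 4 mutually orthogonal +/-1 patterns (rows of a Hadamard matrix).
# Orthogonality makes each stored pattern an exact fixed point with
# zero cross-talk between attractors.

H2 = np.array([[1, 1], [1, -1]])
H = H2
for _ in range(3):                       # Kronecker products build a 16x16 Hadamard matrix
    H = np.kron(H, H2)
patterns = H[:4].astype(float)           # 4 orthogonal patterns, 16 units each

n = patterns.shape[1]
W = patterns.T @ patterns / n            # Hebbian outer-product weights
np.fill_diagonal(W, 0)                   # no self-coupling

def settle(state, steps=5):
    """Synchronous sign-unit dynamics: iterate toward a fixed point."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# A one-bit-corrupted pattern falls back into its attractor basin.
noisy = patterns[0].copy()
noisy[0] *= -1
recovered = settle(noisy)
```

Because the patterns are orthogonal, `W @ p` returns a positive multiple of each stored pattern `p`, so every pattern is recovered exactly rather than approximately; this is the sense in which orthogonality maximizes usable storage in such networks.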
Topic Area: Brain Networks & Neural Dynamics
Extended Abstract: Full Text PDF