Cognitive Computational Neuroscience

Keynote Lectures

Anna Schapiro - Learning representations of specifics and generalities over time

Tuesday, August 12, 4:30 - 5:30 pm, Room A0.01 (overflow A1.02/A1.03/C1.03)

Yasuo Kuniyoshi - From Embodiment To Super-Embodiment: A Constructive Approach To Open-Ended And Human Aligned Intelligence/Moral

Wednesday, August 13, 8:30 - 9:30 am, Room A0.01 (overflow A1.02/A1.03/C1.03)

Pieter R. Roelfsema - Brain mechanisms for conscious visual perception of coherent objects and the technology to restore it in blindness

Friday, August 15, 5:00 - 6:00 pm, Room A0.01 (overflow A1.02/A1.03/C1.03)

Ellie Pavlick

Time and Room TBA

David Poeppel

Time and Room TBA

Hava Siegelmann

Time and Room TBA

Keynote & Tutorial – Digital Brain Models for Working Memory

Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Digital Brain Models for Working Memory

Jorge Mejias1, Francisco Pascoa dos Santos1, Parva Alavian1, Rares Dorcioman1; 1University of Amsterdam

Abstract

Digital brain models, or whole-brain models, are able to incorporate part of the complex brain connectivity obtained from neuroimaging scans, and are usually used to replicate brain activity dynamics. Their capacity to explain even basic cognitive properties is, however, extremely limited. In this keynote, we will present recent advances in which digital brain models are built to incorporate basic capabilities to perform working memory tasks. These models are biophysically oriented and constitute a proof of concept to be expanded in future studies towards more computationally powerful implementations. We will cover examples of models for the human and macaque brains, exploring how these models illustrate a paradigm shift in working memory: from considering it a local process occurring only in prefrontal cortex, to a distributed mechanism that involves multiple cooperating regions across large portions of the brain. We will highlight existing experimental evidence for this new paradigm as well as testable predictions for experimental neuroscientists.

Tutorial Outline

The goal of this tutorial is to replicate the main results of distributed working memory models (as presented in Mejias and Wang, eLife 2022) to gain a better, hands-on understanding of the distributed working memory theory (Christophel et al., TiCS 2017). The tutorial's ideal audience is computational neuroscience researchers (Master's and PhD students, postdocs, and professors) interested in how cognitive functions could be embedded in biologically realistic brain networks.

We will start the tutorial by focusing on the basic mathematical elements of the model for a single brain region, along with its relevant parameters (15 min). We will then introduce a first hands-on exercise in which attendees will simulate the dynamics of a single brain region and gain intuition on how this model can display the elevated neural firing associated with working memory (20 min). The tutorial continues with a careful overview of relevant neuroanatomical data, such as brain connectivity (10 min), and a description of how to use these data to build our digital brain model (20 min). The closing activity will be a second hands-on exercise in which attendees will simulate a full digital brain model for a simple working memory task, replicating the main result of the core article (20 min) and simulating the effect of brain lesions (15 min). The total time of the tutorial will be around 1 h 35 min, with plenty of time to address issues and questions throughout the session. The tutorial will run on Python (Jupyter notebooks) and standard Python libraries such as NumPy.
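As a taste of the first exercise, the sketch below simulates a single-region firing-rate model in the same Python/NumPy setting the tutorial uses, showing how strong recurrent excitation lets a brief cue leave behind elevated, self-sustained activity. This is only a minimal sketch, not the model of Mejias and Wang (2022): the sigmoidal transfer function and every parameter value are illustrative assumptions chosen to reproduce the qualitative effect.

```python
# Minimal sketch: bistable firing-rate unit that "remembers" a transient cue.
# Not the tutorial's actual model; all parameters are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

dt, T = 1.0, 4000.0                 # time step and duration (ms)
t = np.arange(0.0, T, dt)
tau = 60.0                          # rate time constant (ms, assumed)
w = 6.0                             # recurrent excitatory strength (assumed)
I0 = -3.0                           # background drive (assumed)

def f(x):
    """Sigmoidal rate transfer function (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-x))

# Transient stimulus between 1.0 and 1.25 s plays the role of the memorandum.
I_stim = np.where((t >= 1000.0) & (t < 1250.0), 2.5, 0.0)

r = np.zeros_like(t)                # population firing rate (a.u.)
for i in range(1, len(t)):
    drdt = (-r[i - 1] + f(w * r[i - 1] + I0 + I_stim[i - 1])) / tau
    r[i] = r[i - 1] + dt * drdt

plt.plot(t / 1000.0, r)
plt.xlabel("time (s)")
plt.ylabel("firing rate (a.u.)")
plt.title("Activity stays elevated after the cue is removed")
plt.show()
```

In the tutorial's second exercise, units like this are coupled through empirical inter-areal connectivity to build the full digital brain model, turning local persistent activity into distributed working memory.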

Keynote & Tutorial – Uncovering algorithms of visual cognition with multilevel computational theories

Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Uncovering algorithms of visual cognition with multilevel computational theories

Ilker Yildirim1, Mario Belledonne1, Wenyan Bi1; 1Yale University

Abstract

There lies a great distance between the incoming sensory inputs and the percepts and thoughts we experience about the world, including rich representations of objects, places, agents, and events. What are the formats of these representations? How are they inferred from sensory inputs? And how do they support the rest of cognition, including reasoning and decision-making? This K&T will introduce a reverse-engineering framework to address these questions: Multilevel Computational Theories (MCTs). At the core, MCTs hypothesize that the brain builds and manipulates ‘world models’ — i.e., structure-preserving, behaviorally efficacious representations of the physical world. By leveraging advancements in probabilistic computing, computer graphics, dynamical systems, and machine learning, MCTs offer sophisticated, richly testable algorithmic formulations of visual cognition, thus penetrating neural and psychological phenomena. For pedagogical reasons, the talk and the tutorial will center around a case study of modeling the visual perception of physical scenes. The talk will present a computational model of soft object perception as probabilistic inference over a simulation-based generative model of “soft-body dynamics”. The model takes as input cloth animations, computes a posterior distribution over the physical properties of the cloth (stiffness, mass) and the scene (e.g., wind), and makes a decision by matching cloths over an individual physical property. With these components,  the model explains human visual processing across both psychophysical and fMRI experiments. The tutorial will cover the core components of this model in a simpler physical domain, introducing the audience to the toolkit necessary to implement such a model — probabilistic programming and physics-engine-based generative models.

Tutorial Outline

This tutorial will introduce a reverse-engineering approach, Multilevel Computational Theories (MCTs), to the CCN community, broadening the modeling toolbox beyond the more common deep learning approaches. The tutorial will have a pedagogical focus on the domain of the visual perception of physical scenes.

This tutorial will be most useful to researchers interested in building computational models; even though our focus will be on perception and visual cognition, MCTs have broader applicability beyond these domains. Instances of MCTs can make contact with quantitative psychophysics as well as neuroscience experiments (as the talk will demonstrate). Therefore, audience members using both behavioral and neural approaches will benefit from the tutorial. The most closely relevant technical background for the specific model we will cover is familiarity with Bayesian inference and programming experience in Julia or Python.

The flow of the tutorial will be as follows.

  1. Introduce the probabilistic programming package Gen with a toy inference problem.  This will include coding a simple generative model, introducing the key data structure “trace” of Gen, and a simple inference procedure.
  2. Introduce the PhysGen package, which provides an interface for safely implementing simulation-based generative models in Gen. 
  3. Implement a world model using PhysGen: a simulation-based generative model of a ball bouncing on a surface. This generative model involves prior distributions over physical variables such as elasticity and mass, and the initial position and velocity of the ball.
  4. Implement an approximate Bayesian inference procedure to condition this world model on sensory inputs using the particle filtering algorithm. This algorithm approximates a posterior distribution — i.e., a perceived world model. 
  5. Implement a decision-making module that transforms the perceived world model into a decision in a 2AFC match-to-sample task. 

The tutorial will be implemented using Jupyter notebooks and Google Colab. A current implementation can be found here: https://github.com/CNCLgithub/Algorithms-of-the-Mind/blob/main/labs/lab-06/bouncing-balls.ipynb. We will ensure that the tutorial is self-contained and the material is broadly accessible, providing any necessary background accordingly.
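As a rough preview of steps 3 and 4, the sketch below expresses the same idea in plain NumPy rather than in Gen/PhysGen (the actual tutorial uses the Gen toolchain and the notebook linked above): a simulation-based generative model of a bouncing ball with a prior over its elasticity, conditioned on a noisy observed trajectory with a particle filter. The prior, the observation noise, and the physics constants are all illustrative assumptions.

```python
# Plain-NumPy sketch of a simulation-based generative model plus particle
# filtering; not the Gen/PhysGen implementation used in the tutorial.
import numpy as np

rng = np.random.default_rng(0)
dt, g, n_steps = 0.01, 9.81, 200

def simulate_ball(elasticity, y0=1.0, v0=0.0):
    """Forward-simulate a ball bouncing on the floor at y = 0."""
    ys = np.empty(n_steps)
    y, v = y0, v0
    for i in range(n_steps):
        v -= g * dt
        y += v * dt
        if y < 0.0:                   # bounce: clip to floor, damp velocity
            y, v = 0.0, -elasticity * v
        ys[i] = y
    return ys

def sample_prior(n):
    """Prior over the latent physical variable (assumed uniform)."""
    return rng.uniform(0.3, 0.95, size=n)

obs_sigma = 0.05                       # observation noise (assumed)

# Synthetic "sensory input": noisy trajectory from a ground-truth elasticity.
true_elasticity = 0.7
observed = simulate_ball(true_elasticity) + rng.normal(0.0, obs_sigma, n_steps)

# Bootstrap particle filter: weight each particle's predicted trajectory
# against the observation at every step and resample. (With a static latent
# this reduces to sequential importance resampling; a full treatment would
# add rejuvenation moves to avoid particle degeneracy.)
n_particles = 500
particles = sample_prior(n_particles)
trajs = np.stack([simulate_ball(e) for e in particles])

for step in range(n_steps):
    log_w = -0.5 * ((observed[step] - trajs[:, step]) / obs_sigma) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles, trajs = particles[idx], trajs[idx]

# The surviving particles approximate the posterior, i.e. the "perceived
# world model" of step 4.
print(f"posterior mean elasticity ~ {particles.mean():.3f} "
      f"(ground truth {true_elasticity})")
```

A decision module in the spirit of step 5 would then compare such posterior summaries across the two alternatives of the match-to-sample task.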

Keynotes & Tutorials

Keynotes: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorials: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Language: In search of a neural code

Organizers: Jean-Remi King, Lucy Zhang, Linnea Evanson, Hubert Banville

Uncovering algorithms of visual cognition with multilevel computational theories

Organizers: Ilker Yildirim, Mario Belledonne, Wenyan Bi

Digital Brain Models for Working Memory

Organizers: Jorge Mejias & Francisco Pascoa dos Santos

Keynote & Tutorial – Language: In search of a neural code

Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Language: In search of a neural code

Jean-Remi King1, Linnea Evanson1,2, Hubert Banville1, Lucy Zhang2; 1Meta, 2Hôpital Fondation Adolphe de Rothschild

Abstract

In just a few years, language models have become a backbone for artificial intelligence (AI). Beyond the technical feat, this paradigm shift revives foundational questions of cognitive neuroscience, in particular how and why humans acquire, represent, and process language. In this talk, we will show how a systematic comparison between AI models and the human brain helps reveal major principles of the organization of natural language comprehension, production, and acquisition during the early years of human development. The results, based on a single analytical pipeline across more than a thousand individuals, show that fMRI, MEG, and intracranial recordings consistently highlight the hierarchical nature of language representations. They further reveal that the unfolding of these representations over time depends on a specific ‘dynamic neural code’, which allows a sequence of elements (e.g., words) to be represented simultaneously in brain activity, while preserving both their elementary and compositional structures. By bridging neuroscience, linguistics, and AI, these results provide an operational framework for uncovering the general principles that govern the acquisition, structuring, and manipulation of knowledge in biological and artificial neural networks.

Tutorial Outline

In this tutorial, our `Brain and AI` team at Meta will guide participants through the creation of a simple but scalable linear and deep-learning decoding pipeline for MEG and fMRI during natural language processing tasks.
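To sketch what the linear half of such a pipeline can look like, the example below decodes word-level feature vectors from MEG epochs with cross-validated ridge regression. The data are synthetic placeholders, and the array shapes, ridge regularization, and correlation scoring are assumptions for illustration, not the tutorial's actual pipeline.

```python
# Minimal linear decoding sketch: predict word features from MEG epochs.
# Synthetic placeholder data; not the tutorial's actual pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: one MEG epoch (channels x time, flattened) and one
# word-embedding feature vector per presented word.
n_words, n_channels, n_times, n_features = 500, 50, 20, 16
X = rng.standard_normal((n_words, n_channels * n_times))   # MEG epochs
Y = rng.standard_normal((n_words, n_features))             # word features

model = make_pipeline(
    StandardScaler(),
    RidgeCV(alphas=np.logspace(-2, 4, 7)),   # regularization chosen by CV
)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train, test in cv.split(X):
    model.fit(X[train], Y[train])
    Y_pred = model.predict(X[test])
    # Score each feature dimension by Pearson correlation, then average.
    r = [np.corrcoef(Y[test][:, j], Y_pred[:, j])[0, 1]
         for j in range(n_features)]
    scores.append(np.mean(r))

print(f"mean decoding correlation across folds: {np.mean(scores):.3f}")
```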
