Cognitive Computational Neuroscience

2025

Keynote Lecture: Ellie Pavlick

Friday, August 15, 9:30 - 10:30 am, Room A0.01 (overflow A1.02, A1.03, C1.03)

What came first, the sum or the parts? Emergent compositionality in neural networks

Ellie Pavlick, Brown University
Associate Professor of Computer Science, Cognitive Science and Linguistics

Decades of research in cognitive science and AI have focused on compositionality as a hallmark property of the human mind. This focus can seem to frame the question as though we must classify systems as either compositional or idiomatic, cognitive or associative. In this talk, I describe a set of related but different empirical studies of how neural networks achieve, or fail to achieve, compositional behavior. I argue that these findings point to a middle ground in which traditional “symbolic” compositionality can be seen as a special case which is emergent—but nonetheless qualitatively different—from a more general associative mechanism characteristic of neural networks.

Dr. Ellie Pavlick is an Associate Professor of Computer Science, Cognitive Science, and Linguistics at Brown University, and a Research Scientist at Google DeepMind. She leads the Language Understanding and Representation (LUNAR) Lab, which seeks to understand how language "works" and to build computational models that understand language the way humans do. Her lab's projects focus on language broadly construed, and often include the study of capacities more general than language, including conceptual representations, reasoning, learning, and generalization. The lab is interested in understanding how humans achieve these things, how computational models (especially large language models and similar types of "black box" AI systems) achieve them, and what insights can be gained from comparing the two. It often collaborates with researchers outside of computer science, including in cognitive science, neuroscience, and philosophy.

Keynote Lectures

Anna Schapiro - Learning representations of specifics and generalities over time

Tuesday, August 12, 4:30 - 5:30 pm, Room A0.01 (overflow A1.02/A1.03/C1.03)

Yasuo Kuniyoshi - From Embodiment To Super-Embodiment: A Constructive Approach To Open-Ended And Human Aligned Intelligence/Moral

Wednesday, August 13, 8:30 - 9:30 am, Room A0.01 (overflow A1.02/A1.03/C1.03)

Ellie Pavlick - What came first, the sum or the parts? Emergent compositionality in neural networks

Friday, August 15, 9:30 - 10:30 am, Room A0.01 (overflow A1.02/A1.03/C1.03)

Pieter R. Roelfsema - Brain mechanisms for conscious visual perception of coherent objects and the technology to restore it in blindness

Friday, August 15, 5:00 - 6:00 pm, Room A0.01 (overflow A1.02/A1.03/C1.03)

David Poeppel

Time and Room TBA

Hava Siegelmann

Time and Room TBA

Keynote & Tutorial – Digital Brain Models for Working Memory

Keynote & Tutorial

Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Digital Brain Models for Working Memory

Jorge Mejias1, Francisco Pascoa dos Santos1, Parva Alavian1, Rares Dorcioman1; 1University of Amsterdam

Abstract

Digital brain models, or whole-brain models, incorporate part of the complex brain connectivity obtained from neuroimaging scans and are usually used to replicate brain activity dynamics. Their capacity to explain even basic cognitive properties is, however, extremely limited. In this keynote, we will present recent advances in which digital brain models are built to incorporate basic capabilities for performing working memory tasks. These models are biophysically oriented and constitute a proof of concept to be expanded in future studies towards more computationally powerful implementations. We will cover examples of models for the human and macaque brains, exploring how these models illustrate a paradigm shift in working memory: from considering it a local process occurring only in prefrontal cortex to a distributed mechanism that involves multiple cooperating regions across large portions of the brain. We will highlight existing experimental evidence for this new paradigm as well as testable predictions for experimental neuroscientists.

Tutorial Outline

The goal of this tutorial is to replicate the main results of distributed working memory models (as presented in Mejias and Wang, eLife 2022) in order to gain a hands-on understanding of the distributed working memory theory (Christophel et al., TiCS 2017). The tutorial's ideal audience is computational neuroscience researchers (Master's and PhD students, postdocs, and professors) interested in how cognitive functions could be embedded in biologically realistic brain networks.

We will start the tutorial by focusing on the basic mathematical elements of the model for a single brain region, along with its relevant parameters (15 min). We will then introduce a first hands-on exercise in which attendees simulate the dynamics of a single brain region and gain intuition on how this model can display the elevated neural firing associated with working memory (20 min). The tutorial continues with an overview of relevant neuroanatomical data, such as brain connectivity (10 min), and a description of how to use these data to build our digital brain model (20 min). The closing activity will be a second hands-on exercise in which attendees simulate a full digital brain model performing a simple working memory task, replicating the main result of the core article (20 min) and simulating the effect of brain lesions (15 min). The total time of the tutorial will be around 1h 35min, with plenty of time to address issues and questions throughout the session. The tutorial will run in Python (Jupyter notebooks) using standard libraries such as NumPy.
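
As a rough preview of the first hands-on exercise, the sketch below simulates a single recurrently connected region with a sigmoidal transfer function: with strong enough recurrent excitation, a brief stimulus pulse switches the region into an elevated, self-sustained firing state, the persistent activity associated with working memory. All parameter values here are illustrative assumptions and are not taken from the Mejias and Wang (2022) model.

    import numpy as np

    # Minimal rate model of one region: tau * dr/dt = -r + f(w * r + I(t)).
    # Illustrative parameters only; the tutorial's actual model differs.
    tau, w, dt = 0.06, 6.0, 0.001                     # time constant (s), recurrent weight, step (s)
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 3.0)))    # sigmoidal transfer function

    t = np.arange(0.0, 3.0, dt)
    stim = np.where((t > 0.5) & (t < 0.7), 1.5, 0.0)  # brief stimulus pulse

    r = np.zeros_like(t)
    for i in range(1, len(t)):
        drdt = (-r[i - 1] + f(w * r[i - 1] + stim[i - 1])) / tau
        r[i] = r[i - 1] + dt * drdt

    # Firing stays elevated long after the stimulus ends: persistent (memory) activity.
    print(f"rate before stimulus:       {r[int(0.4 / dt)]:.3f}")
    print(f"rate after stimulus offset: {r[int(2.5 / dt)]:.3f}")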

Community Event – Universality and Idiosyncrasy of Perceptual Representations

Community Event

Wednesday, August 13, 10:00 am - 12:00 pm, Room C1.03

Universality and Idiosyncrasy of Perceptual Representations

Moderators: Evelina Fedorenko1, Nikolaus Kriegeskorte2; 1Massachusetts Institute of Technology, 2Columbia University

Proponents: Mick Bonner1, Eghbal Hosseini2, Brian Cheung2; 1Johns Hopkins University, 2Massachusetts Institute of Technology

Critics: Jenelle Feather1, Alex Williams2, Tal Golan3; 1Carnegie Mellon University, 2Flatiron Institute and New York University, 3Ben-Gurion University of the Negev

Abstract

One of the premises of current cognitive computational modeling is that not all neural network models are equally aligned with the neural circuit under investigation. Models trained on different tasks or datasets or employing different architectures acquire distinct representations, and the idiosyncratic aspects of these representations (i.e., model-specific features) vary in their alignment with biological representations. This premise motivates the systematic benchmarking of various neural networks against the brain, with the aim of approaching brain-aligned models. However, some recent studies suggest the opposite: distinct neural networks learn the same representations. Furthermore, according to the “universal representation hypothesis,” components of representations shared across neural networks are also shared with humans, whereas any idiosyncratic components—specific to individual models—are not shared with humans. The implications are significant: If the “universal representation hypothesis” holds, model comparison is futile. This event brings together proponents and critics of the universal representation hypothesis. Together, we will consider the following questions: Are neural network representations universal or do they also have nonshared features that reflect the architecture, objective, or learning rule? Are features not shared among artificial neural network representations necessarily misaligned with the brain? What empirical tests would adjudicate this debate? And what should cognitive computational neuroscience look like under each hypothesis?

Session Plan

Do all neural network models converge to a universal representation, or do their internal representations differ in ways that are meaningful for understanding the brain? This session will explore the universality and idiosyncrasy of representations in artificial and biological neural systems. The moderators will provide the necessary scientific background and define the core questions at the heart of the debate. These questions will be addressed through a series of short talks presenting contrasting perspectives, followed by a panel discussion and open audience Q&A.

Participants will gain a clearer understanding of what is meant by universal and idiosyncratic representations, why this distinction matters for cognitive computational neuroscience, and how it may shape future research. The session will examine whether model-specific components are necessarily misaligned with the brain and explore empirical criteria for adjudicating between the universality and idiosyncrasy hypotheses.
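
One concrete example of the kind of empirical test at stake (an illustration of ours, not a method endorsed by the moderators or panelists) is to quantify how much of two networks' representations is shared, for instance with linear centered kernel alignment (CKA) computed on activations for a common stimulus set. High similarity across independently trained models is typically read as evidence for universal structure, while the residual, model-specific variance is the candidate idiosyncratic component.

    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between two activation matrices (stimuli x units)."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        hsic = np.linalg.norm(Y.T @ X, "fro") ** 2    # cross-covariance energy
        return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

    rng = np.random.default_rng(0)
    stimuli = rng.standard_normal((200, 50))          # 200 shared stimuli, 50 latent features

    # Two toy "networks": different random linear readouts of the same stimuli,
    # plus network-specific noise standing in for idiosyncratic features.
    net_a = stimuli @ rng.standard_normal((50, 64)) + 0.1 * rng.standard_normal((200, 64))
    net_b = stimuli @ rng.standard_normal((50, 64)) + 0.1 * rng.standard_normal((200, 64))
    unrelated = rng.standard_normal((200, 64))        # no shared stimulus structure

    print(f"CKA(net_a, net_b):     {linear_cka(net_a, net_b):.2f}")      # high: shared structure
    print(f"CKA(net_a, unrelated): {linear_cka(net_a, unrelated):.2f}")  # much lower baseline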

Community Event – The Algonauts Project 2025 Challenge

Community Event

Time and Room TBA

The Algonauts Project 2025 Challenge

Alessandro Gifford, Domenic Bersch, Marie St-Laurent, Basile Pinsard, Julie Boyle, Lune Bellec, Aude Oliva, Gemma Roig, Radoslaw Cichy

The Algonauts Project, first launched in 2019, is on a mission to bring biological and machine intelligence researchers together on a common platform to exchange ideas and pioneer the intelligence frontier. Inspired by the astronauts’ exploration of space, “algonauts” explore human and artificial intelligence with state-of-the-art algorithmic tools, thus advancing both fields.

The Algonauts Project 2025 challenge focuses on predicting responses in the human brain as participants perceive complex multimodal naturalistic movies. To enable data-hungry modeling, the challenge runs on data from CNeuroMod (https://www.cneuromod.ca/), the largest suitable brain dataset available: almost 80 hours of neural recordings for each of 4 human participants. To ensure the robustness and relevance of results, the challenge features a model selection process based on out-of-distribution evaluation.
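
Submissions to challenges of this kind typically take the form of encoding models: stimulus features (for example, activations of a pretrained vision or language model applied to the movie) are mapped to each participant's fMRI responses with regularized linear regression, and the correlation between predicted and measured responses on held-out movies is the score. The sketch below illustrates that basic pattern on simulated data; the array shapes and the scikit-learn ridge regression are our own assumptions, not the challenge's required format.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import RidgeCV

    rng = np.random.default_rng(0)

    # Hypothetical shapes: 5000 training / 500 test movie time points (TRs),
    # 768 stimulus features per TR, 1000 brain parcels for one participant.
    X_train, X_test = rng.standard_normal((5000, 768)), rng.standard_normal((500, 768))
    true_w = 0.05 * rng.standard_normal((768, 1000))
    Y_train = X_train @ true_w + rng.standard_normal((5000, 1000))
    Y_test = X_test @ true_w + rng.standard_normal((500, 1000))

    # Ridge regression with cross-validated regularization, fit per participant.
    model = RidgeCV(alphas=np.logspace(1, 5, 9)).fit(X_train, Y_train)
    Y_pred = model.predict(X_test)

    # Score: correlation between predicted and measured response for each parcel.
    scores = [pearsonr(Y_test[:, v], Y_pred[:, v])[0] for v in range(Y_test.shape[1])]
    print(f"median held-out correlation: {np.median(scores):.2f}")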

During the first part of this session, the Algonauts Project and challenge will be introduced, followed by talks from this year’s challenge winners. The second part of the session will be a panel discussion on challenges in cognitive computational neuroscience, moderated by Alessandro Gifford and featuring Andreas Tolias, Fabian Sinz, Martin Schrimpf, Lune Bellec, and Radoslaw Cichy. Audience participation in the panel discussion is encouraged through both in-person contributions and digital engagement.

More information at https://algonautsproject.com/

Keynote & Tutorial – Uncovering algorithms of visual cognition with multilevel computational theories

Keynote & Tutorial

Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Uncovering algorithms of visual cognition with multilevel computational theories

Ilker Yildirim1, Mario Belledonne1, Wenyan Bi1; 1Yale University

Abstract

There lies a great distance between the incoming sensory inputs and the percepts and thoughts we experience about the world, including rich representations of objects, places, agents, and events. What are the formats of these representations? How are they inferred from sensory inputs? And how do they support the rest of cognition, including reasoning and decision-making? This K&T will introduce a reverse-engineering framework to address these questions: Multilevel Computational Theories (MCTs). At the core, MCTs hypothesize that the brain builds and manipulates ‘world models’ — i.e., structure-preserving, behaviorally efficacious representations of the physical world. By leveraging advancements in probabilistic computing, computer graphics, dynamical systems, and machine learning, MCTs offer sophisticated, richly testable algorithmic formulations of visual cognition, thus penetrating neural and psychological phenomena. For pedagogical reasons, the talk and the tutorial will center around a case study of modeling the visual perception of physical scenes. The talk will present a computational model of soft object perception as probabilistic inference over a simulation-based generative model of “soft-body dynamics”. The model takes as input cloth animations, computes a posterior distribution over the physical properties of the cloth (stiffness, mass) and the scene (e.g., wind), and makes a decision by matching cloths over an individual physical property. With these components, the model explains human visual processing across both psychophysical and fMRI experiments. The tutorial will cover the core components of this model in a simpler physical domain, introducing the audience to the toolkit necessary to implement such a model — probabilistic programming and physics-engine-based generative models.

Tutorial Outline

This tutorial will introduce a reverse-engineering approach, Multilevel Computational Theories (MCTs), to the CCN community, broadening the modeling toolbox beyond the more common deep learning approaches. The tutorial will have a pedagogical focus on the domain of the visual perception of physical scenes.

This tutorial will be most useful to researchers interested in building computational models; even though our focus will be on perception and visual cognition, MCTs have broader applicability beyond these domains. Instances of MCTs can make contact with quantitative psychophysics as well as neuroscience experiments (as the talk will demonstrate). Therefore, audience members using both behavioral and neural approaches will benefit from the tutorial. The most closely relevant technical background for the specific model we will cover is familiarity with Bayesian inference and programming experience in Julia or Python.

The flow of the tutorial will be as follows.

  1. Introduce the probabilistic programming package Gen with a toy inference problem. This will include coding a simple generative model, introducing Gen's key "trace" data structure, and running a simple inference procedure.
  2. Introduce the PhysGen package, which provides an interface for safely implementing simulation-based generative models in Gen.
  3. Implement a world model using PhysGen: a simulation-based generative model of a ball bouncing on a surface. This generative model involves prior distributions over physical variables such as elasticity and mass, and over the initial position and velocity of the ball.
  4. Implement an approximate Bayesian inference procedure that conditions this world model on sensory inputs using the particle filtering algorithm. This algorithm approximates a posterior distribution, i.e., a perceived world model.
  5. Implement a decision-making module that transforms the perceived world model into a decision in a 2AFC match-to-sample task.

The tutorial will be implemented using Jupyter notebooks and Google Colab. A current implementation can be found here: https://github.com/CNCLgithub/Algorithms-of-the-Mind/blob/main/labs/lab-06/bouncing-balls.ipynb. We will ensure that the tutorial is self-contained and the material is broadly accessible, providing any necessary background accordingly.
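
The tutorial itself is built on the Gen probabilistic programming package (see the linked notebook). Purely as an illustration of the same ingredients in plain Python, the sketch below simulates a one-dimensional bouncing ball with unknown elasticity and then uses a bootstrap particle filter over noisy height observations to approximate the posterior over that elasticity, loosely mirroring steps 3 and 4 of the outline. The dynamics, priors, and noise levels are assumptions made for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    DT, G, T = 0.02, 9.8, 150              # time step (s), gravity (m/s^2), number of observations

    def simulate(elasticity, y0=2.0, v0=0.0, noise=0.0):
        """Simulate a 1-D ball dropped from height y0; each bounce damps velocity by elasticity."""
        ys, y, v = [], y0, v0
        for _ in range(T):
            v -= G * DT
            y += v * DT
            if y < 0.0:                    # bounce: clamp to the floor and reflect velocity
                y, v = 0.0, -v * elasticity
            ys.append(y + noise * rng.standard_normal())
        return np.array(ys)

    # "World": the true elasticity is unknown to the observer; observations are noisy heights.
    true_elasticity = 0.7
    obs = simulate(true_elasticity, noise=0.05)

    # Bootstrap particle filter over the static latent elasticity, prior U(0.3, 0.95).
    n_particles, obs_sigma = 2000, 0.05
    elasticity = rng.uniform(0.3, 0.95, n_particles)
    y = np.full(n_particles, 2.0)
    v = np.zeros(n_particles)
    log_w = np.zeros(n_particles)

    for t in range(T):
        v -= G * DT
        y += v * DT
        bounced = y < 0.0
        v[bounced] = -v[bounced] * elasticity[bounced]
        y[bounced] = 0.0
        log_w += -0.5 * ((obs[t] - y) / obs_sigma) ** 2   # Gaussian observation likelihood
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:        # resample on low effective sample size
            idx = rng.choice(n_particles, n_particles, p=w)
            elasticity, y, v = elasticity[idx], y[idx], v[idx]
            log_w = np.zeros(n_particles)

    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    print(f"true elasticity:         {true_elasticity:.2f}")
    print(f"posterior mean estimate: {np.sum(w * elasticity):.2f}")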

Keynotes & Tutorials

Keynotes: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorials: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Language: In search of a neural code

Organizers: Jean-Remi King, Lucy Zhang, Linnea Evanson, Hubert Banville

Uncovering algorithms of visual cognition with multilevel computational theories

Organizers: Ilker Yildirim, Mario Belledonne, Wenyan Bi

Digital Brain Models for Working Memory

Organizers: Jorge Mejias & Francisco Pascoa

Keynote & Tutorial – Language: In search of a neural code

Keynote & Tutorial

Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Language: In search of a neural code

Jean-Remi King1, Linnea Evanson1,2, Hubert Banville1, Lucy Zhang2; 1Meta, 2Hôpital Fondation Adolphe de Rothschild

Abstract

In just a few years, language models have become a backbone for artificial intelligence (AI). Beyond this technical feat, the paradigm shift revives foundational questions of cognitive neuroscience, and in particular, how and why humans acquire, represent, and process language. In this talk, we will show how a systematic comparison between AI models and the human brain helps reveal major principles of the organization of natural language comprehension, production, and acquisition during the early years of human development. The results, based on a single analytical pipeline across more than a thousand individuals, show that fMRI, MEG, and intracranial recordings consistently highlight the hierarchical nature of language representations. They further reveal that the unfolding of these representations over time depends on a specific ‘dynamic neural code’, which allows a sequence of elements (e.g., words) to be represented simultaneously in brain activity, while preserving both their elementary and compositional structures. By bridging neuroscience, linguistics, and AI, these results provide an operational framework for uncovering the general principles that govern the acquisition, structuring, and manipulation of knowledge in biological and artificial neural networks.

Tutorial Outline

In this tutorial, our 'Brain and AI' team at Meta will guide participants through the creation of a simple but scalable linear and deep-learning decoding pipeline for MEG and fMRI recorded during natural language processing tasks.
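
As a rough sketch of what the linear part of such a pipeline involves (simulated data and made-up dimensions, not the team's actual code), a common pattern is to fit a regularized regression from the brain measurements for each word or time window to a target representation of the stimulus, such as word embeddings, and to evaluate with a correlation score on held-out data.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)

    # Hypothetical dimensions: 1200 word presentations, 270 MEG sensors (one time window
    # per word), and 300-dimensional word embeddings as the decoding target.
    n_words, n_sensors, n_dims = 1200, 270, 300
    embeddings = rng.standard_normal((n_words, n_dims))
    mixing = 0.05 * rng.standard_normal((n_dims, n_sensors))
    meg = embeddings @ mixing + rng.standard_normal((n_words, n_sensors))  # simulated MEG

    scores = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(meg):
        model = RidgeCV(alphas=np.logspace(0, 4, 9)).fit(meg[train], embeddings[train])
        pred = model.predict(meg[test])
        # Score each held-out word by correlating its predicted and true embedding vectors.
        r = [np.corrcoef(pred[i], embeddings[test][i])[0, 1] for i in range(len(test))]
        scores.append(np.mean(r))

    print(f"mean decoding correlation across folds: {np.mean(scores):.2f}")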

Community Event – Conversations on Consciousness: How the CCN Community Can Contribute

Community Event

Wednesday, August 13, 10:00 am - 12:00 pm, Room C1.04

Conversations on Consciousness: How the CCN Community Can Contribute

Paul Linton1, Megan Peters2, Steve Fleming3, Lars Muckli4; 1Columbia University, 2University of California, Irvine, 3University College London, 4University of Glasgow

Abstract

The CCN Community works on a diverse set of topics from perception to cognition to action. But one question that has been relatively overlooked at CCN is consciousness or subjective experience. Our Community Event explores why this is, and how to address it. The key question is how we should think about consciousness in computational terms. This topic has recently come to the fore with discussions of consciousness in AI, but our focus is the human brain: what kinds of computations appear to correlate with consciousness, and how can we model them? But also, how can we be sure we’re tracking consciousness in the first place? Our event will focus on the progress made in three ongoing Templeton adversarial collaborations. But this Community Event will also be a critical evaluation of recent developments in consciousness science, and we ask the CCN Community to reflect on what we might have missed along the way.

Session Plan

Whilst the topics many of us study lend themselves to thinking about consciousness, we believe there are three reasons why consciousness has not played a larger role at CCN, all of which our Community Event seeks to address:

1. Theories: First, we may feel consciousness science is its own distinct subfield, with its own specialized knowledge. So, the first focus of our Community Event is educational: to bring the CCN Community up to speed with recent developments in consciousness science, so that all will be better equipped to engage critically.

2. Computational Models: Second, we may feel that computational approaches have little to say about consciousness. So, the second focus of our Community Event is computational: to highlight existing computational frameworks for consciousness, and draw on the expertise of the CCN Community to develop new ways of thinking about consciousness in computational terms.

3. Experiments: Third, we may feel that collaborations between Cognitive Science, Neuroscience, and Artificial Intelligence in the context of consciousness science are already catered to by Templeton adversarial collaborations on consciousness. But CCN is uniquely placed to inform these collaborations, and to inform and participate in new lines of research they may inspire. So, the third focus of our Community Event is collaborative: presenting the work of three ongoing Templeton collaborations, and opening the discussion to the CCN Community with the aim of informing future work.

Community Event – Naturalistic Games…: A Project Co-Design Workshop

Community Event

Wednesday, August 13, 10:00 am - 12:00 pm, Room A1.02

Naturalistic Games as a Benchmark to Bridge Cognitive Science, Computational Neuroscience and AI: A Community Led, Round-Table Discussion

Jascha Achterberg1, Laurence Hunt1, Chris Summerfield1, Anna Székely2; 1University of Oxford, 2Budapest University of Technology and Economics

Abstract

In this round-table workshop, we will discuss video games as a potential testbed for comparing biological and artificially intelligent behaviour. Video games capture much of the complexity of real-world decision tasks, such as vast state spaces, multi-step action sequences, interactions with objects and agents, and dynamic interleaving of planning and execution. Yet crucially, they also present an experimentally and computationally tractable testbed which allows for experimental manipulations, and comparison of human vs. machine behaviour and internal computations. We will have short talks from researchers currently using video games as a tool for understanding human and artificial cognition. One central aim will be to identify robust, reliable benchmarks with which human and artificial agents can be compared. The main outcome of this workshop, if successful, would be a co-designed project that is shaped by input from across the CCN community, where resulting data analysis/modelling is shared across a number of labs.

Session Plan

Our workshop will begin with an opportunity for participants to give ‘pitch talks’ (30-45 minutes) in which they pitch an idea about games, benchmarks, or current ongoing research that they consider relevant. If you are attending CCN and would like to be considered for a pitch talk, please fill out our online form.

Participants will then be split into breakout groups (~45 minutes), to discuss the following questions:

  • What constitutes a useful benchmark against which to evaluate human behavioural and/or neural data during naturalistic gameplay?
  • What unique questions in cognitive science/neuroscience/AI might be addressed using naturalistic games that are difficult to address using traditional experimental design?
  • What computational models are most appropriate for comparison with human behavioural and neural data in studying naturalistic behaviour with games?

We will then reconvene for a collective discussion for the remaining time, and summarise how this might inform a future, co-designed data collection project.
