Cognitive Computational Neuroscience

2025

Visit Amsterdam

  • Discover Amsterdam: https://www.iamsterdam.com/en
  • Local transport: https://www.iamsterdam.com/en/travel-stay/getting-around / https://www.gvb.nl/en/visit-amsterdam/gvb-public-transport-company-amsterdam
    • GVB (Tram, Bus, Metro)
    • OV-chipkaart or bank/credit card
    • Bike rental (explore like a local)
  • Restaurants and bars: https://www.iamsterdam.com/en/see-and-do/restaurant-and-bars
  • See and do: https://www.iamsterdam.com/en/see-and-do
    • Marineterrein (picnicking, swimming)
    • NDSM werf (former shipyard, now cultural hotspot)
    • 9 straatjes ("9 streets", shopping)
    • Jordaan (neighborhood for shopping, bars)
    • Red light district
    • Nieuwmarkt (square with bars)
    • Dappermarkt (market, Mon-Sun 10am-5pm)
    • Oosterpark (park, picnicking)
    • Albert Cuypmarkt (market, Mon-Sat 9am-6pm)
    • Noordermarkt (market, Sat 9am-4pm, Mon 9am-2pm)
    • Winkel 43 (best apple pie)
    • A'DAM Toren (lookout tower): https://www.adamlookout.com/
  • Art and culture: https://www.iamsterdam.com/en/explore/neighbourhoods/centrum/art-and-culture
  • Family and kids: https://www.iamsterdam.com/en/see-and-do/family-and-kids
    • NEMO science museum
    • ARTIS (zoo)
    • Hortus Botanicus (botanical garden)
    • Wereldmuseum (museum of world cultures)
    • Het Scheepvaartmuseum (maritime museum)

Local Food Options


Lunch places

  • REC Cafeteria
  • Cafe CREA
  • Kumpir Food Truck
  • Bagels & Beans
  • Albert Heijn (Supermarket)
  • Lebkov & Sons (Sandwiches)
  • Sagra Sandwiches
  • Box Sociaal
  • Bloem
  • Coffee Company


Dinner places

  • La Vallade
  • SOTTO Pizza
  • Cafe Kadijk
  • Cafe-Restaurant De Plantage
  • Restaurant Entrepot
  • Loetje Oost (the "best" steak)


Bars

  • CREA
  • Gollem Bier Bar
  • Cafe De Druif
  • Cafe de Krater
  • Kriterion (also cinema)
  • De Groene Olifant
  • Cafe Eik en Linde
  • De Biertuin Linnaeusstraat
  • Brouwerij Poesiat & Kater
  • Cafe de Jeugd
  • Brouwerij 't IJ - Proeflokaal de Molen
  • Bar Joost


Keynote Lecture: David Poeppel

Thursday, August 14, 8:30 – 9:30 am, Room A0.01
Overflow Rooms A1.02, A1.03, C1.03

Rhythms and Algorithms: From Vibrations in the Ear to Abstractions in the Brain

David Poeppel
Max Planck Society and New York University

The brain has rhythms - and so do music and speech. Experimental work reveals that the temporal structure of speech and music and the temporal organization of various brain structures align in systematic ways. The role that brain rhythms play in perception and cognition is vigorously debated and continues to be elucidated through neurophysiological and computational studies of various types. I describe some intuitively simple but surprising results that illuminate the temporal structure of perceptual experience. From recognizing speech to building abstract mental structures, how the brain uses time to construct perceptual and cognitive representations reveals unexpected puzzles, including in the context of auditory perception and language comprehension.

David Poeppel has been a Professor of Psychology and Neural Science at NYU since 2009 and was a Director in the Max Planck Society from 2014 to 2025. Trained at MIT in cognitive science, linguistics, and neuroscience, David did his post-doctoral training at UCSF. From 1998 until 2008, he was a professor at the University of Maryland College Park, where he ran the Cognitive Neuroscience of Language laboratory. From 2014 to 2021, he was the Director of the Department of Neuroscience at the Max Planck Institute for Empirical Aesthetics in Frankfurt. From 2021 to 2024, he was the CEO of the Ernst Strüngmann Institute for Neuroscience in Frankfurt. He has been a Fellow at the Wissenschaftskolleg zu Berlin and the American Academy in Berlin, and a guest professor at many institutions. He is a Fellow of the American Association for the Advancement of Science and a member of the German National Academy of Sciences Leopoldina. David’s research focuses on the brain basis of language, speech, and hearing, and his empirical and theoretical work has contributed to a deeper understanding of the links between neurobiology and cognition.

Keynote Lecture: Ellie Pavlick

Friday, August 15, 9:30 – 10:30 am, Room A0.01
Overflow Rooms A1.02, A1.03, C1.03

What came first, the sum or the parts? Emergent compositionality in neural networks

Ellie Pavlick, Brown University
Associate Professor of Computer Science, Cognitive Science and Linguistics

Decades of research in cognitive science and AI have focused on compositionality as a hallmark property of the human mind. This focus can seem to frame the question as though we must classify systems as either compositional or idiomatic, cognitive or associative. In this talk, I describe a set of related but different empirical studies of how neural networks achieve, or fail to achieve, compositional behavior. I argue that these findings point to a middle ground in which traditional “symbolic” compositionality can be seen as a special case which is emergent—but nonetheless qualitatively different—from a more general associative mechanism characteristic of neural networks.

Dr. Ellie Pavlick is an Associate Professor of Computer Science, Cognitive Science, and Linguistics at Brown University, and a Research Scientist at Google DeepMind. She leads the Language Understanding and Representation (LUNAR) Lab, which seeks to understand how language "works" and to build computational models that can understand language the way humans do. Her lab's projects focus on language broadly construed, and often include the study of capacities more general than language, such as conceptual representations, reasoning, learning, and generalization. The lab is interested in understanding how humans achieve these things, how computational models (especially large language models and similar types of "black box" AI systems) achieve them, and what insights can be gained from comparing the two. The lab often collaborates with researchers outside of computer science, including in cognitive science, neuroscience, and philosophy.

Keynote Lectures

Anna Schapiro - Learning representations of specifics and generalities over time

Tuesday, August 12, 4:30 - 5:30 pm, Room A0.01 (overflow A1.02/A1.03/C1.03)

Yasuo Kuniyoshi - From Embodiment To Super-Embodiment: A Constructive Approach To Open-Ended And Human Aligned Intelligence/Moral

Wednesday, August 13, 8:30 - 9:30 am, Room A0.01 (overflow A1.02/A1.03/C1.03)

Ellie Pavlick - What came first, the sum or the parts? Emergent compositionality in neural networks

Friday, August 15, 9:30 - 10:30 am, Room A0.01 (overflow A1.02/A1.03/C1.03)

Pieter R. Roelfsema - Brain mechanisms for conscious visual perception of coherent objects and the technology to restore it in blindness

Friday, August 15, 5:00 - 6:00 pm, Room A0.01 (overflow A1.02/A1.03/C1.03)

David Poeppel - Rhythms and Algorithms: From Vibrations in the Ear to Abstractions in the Brain

Thursday, August 14, 8:30 - 9:30 am, Room A0.01 (overflow A1.02/A1.03/C1.03)

Hava Siegelmann

Time and Room TBA


Keynote & Tutorial – Digital Brain Models for Working Memory

Keynote: Thursday, August 14, 1:45 – 3:45 pm, Room A0.01
Tutorial: Thursday, August 14, 4:15 – 6:00 pm, Room A1.03

Digital Brain Models for Working Memory


Jorge Mejias, Francisco Pascoa dos Santos, Parva Alavian, Rares Dorcioman (University of Amsterdam)

Abstract

Digital brain models, or whole-brain models, incorporate part of the complex brain connectivity obtained from neuroimaging scans and are usually used to replicate brain activity dynamics. Their capacity to explain even basic cognitive properties is, however, extremely limited. In this keynote, we will present recent advances in which digital brain models are built to incorporate basic capabilities to perform working memory tasks. These models are biophysically oriented and constitute a proof of concept to be expanded in future studies towards more computationally powerful implementations. We will cover examples of models for the human and macaque brains, exploring how these models illustrate a paradigm shift in working memory: from considering it a local process occurring only in prefrontal cortex, to a distributed mechanism that involves multiple cooperating regions across large portions of the brain. We will highlight existing experimental evidence for this new paradigm as well as testable predictions for experimental neuroscientists.

Tutorial Outline

The goal of this tutorial is to replicate the main results of distributed working memory models (as presented in Mejias and Wang, eLife 2022) to gain a better, hands-on understanding of the distributed working memory theory (Christophel et al., TiCS 2017). The tutorial's ideal audience is computational neuroscience researchers (Master's and PhD students, postdocs, professors) interested in how cognitive functions could be embedded in biologically realistic brain networks.

We will start the tutorial by focusing on the basic mathematical elements of the model for a single brain region, along with its relevant parameters (15 min). We will then introduce a first hands-on exercise in which attendees will simulate the dynamics of a single brain region and gain intuition on how this model can display the elevated neural firing associated with working memory (20 min). The tutorial continues with a careful overview of relevant neuroanatomical data, such as brain connectivity (10 min), and a description of how to use these data to build our digital brain model (20 min). The closing activity will be a second hands-on exercise in which attendees will simulate a full digital brain model for a simple working memory task, replicating the main result of the core article (20 min) and simulating the effect of brain lesions (15 min). The total time of the tutorial will be around 1h 35min, with plenty of time to address issues and questions throughout the session. The tutorial will use Python (Jupyter notebooks) and standard Python libraries like NumPy.
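For a feel of the first hands-on exercise before the session, here is a minimal NumPy sketch of a single recurrently connected excitatory population. It is an illustrative stand-in, not the Mejias and Wang (2022) model: the transfer function, weights, and time constant are invented so that strong recurrence makes the unit bistable, letting a brief stimulus switch it into the persistent elevated-firing state associated with working memory.

```python
# Minimal sketch (illustrative parameters, NOT the Mejias & Wang 2022 model):
# a single excitatory population whose strong recurrence makes it bistable,
# so a brief input pulse switches it into a persistent "memory" state.
import numpy as np

def f(x):
    """Sigmoidal rate transfer function (soft threshold at x = 3)."""
    return 1.0 / (1.0 + np.exp(-(x - 3.0)))

dt, T = 1e-3, 4.0                   # time step and total duration (s)
t = np.arange(0.0, T, dt)
tau = 0.06                          # population time constant (s)
w_ee = 6.0                          # recurrent weight, strong enough for bistability
stim = np.where((t > 0.5) & (t < 1.0), 2.0, 0.0)   # transient stimulus pulse

r = np.zeros_like(t)                # population firing rate (a.u.)
for i in range(1, len(t)):
    drdt = (-r[i - 1] + f(w_ee * r[i - 1] + stim[i - 1])) / tau
    r[i] = r[i - 1] + dt * drdt

# The rate stays elevated long after the pulse ends at t = 1 s: persistent
# activity, the standard rate-model signature of working memory.
for label, (t0, t1) in {"before": (0.0, 0.5), "during": (0.5, 1.0),
                        "after": (2.0, 4.0)}.items():
    window = (t >= t0) & (t < t1)
    print(f"{label} stimulus: mean rate = {r[window].mean():.2f}")
```

The full tutorial models go beyond this single population, coupling many such regions through empirical brain connectivity, as described in the outline above.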

Community Event – Universality and Idiosyncrasy of Perceptual Representations

Wednesday, August 13, 10:00 am – 12:00 pm, Room C1.03

Universality and Idiosyncrasy of Perceptual Representations


Moderators: Evelina Fedorenko (Massachusetts Institute of Technology), Nikolaus Kriegeskorte (Columbia University)


Proponents: Mick Bonner (Johns Hopkins University), Eghbal Hosseini (Massachusetts Institute of Technology), Brian Cheung (Massachusetts Institute of Technology)


Critics: Jenelle Feather (Carnegie Mellon University), Alex Williams (Flatiron Institute and New York University), Tal Golan (Ben-Gurion University of the Negev)

Abstract

One of the premises of current cognitive computational modeling is that not all neural network models are equally aligned with the neural circuit under investigation. Models trained on different tasks or datasets or employing different architectures acquire distinct representations, and the idiosyncratic aspects of these representations (i.e., model-specific features) vary in their alignment with biological representations. This premise motivates the systematic benchmarking of various neural networks against the brain, with the aim of approaching brain-aligned models. However, some recent studies suggest the opposite: distinct neural networks learn the same representations. Furthermore, according to the “universal representation hypothesis,” components of representations shared across neural networks are also shared with humans, whereas any idiosyncratic components—specific to individual models—are not shared with humans. The implications are significant: If the “universal representation hypothesis” holds, model comparison is futile. This event brings together proponents and critics of the universal representation hypothesis. Together, we will consider the following questions: Are neural network representations universal or do they also have nonshared features that reflect the architecture, objective, or learning rule? Are features not shared among artificial neural network representations necessarily misaligned with the brain? What empirical tests would adjudicate this debate? And what should cognitive computational neuroscience look like under each hypothesis?
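To make the terms of the debate concrete, the sketch below computes linear centered kernel alignment (CKA; Kornblith et al., 2019), one common index of representational similarity, between the activations of two toy "networks". Everything here is a synthetic placeholder invented for illustration, not the models or data the panelists discuss: shared stimulus-driven structure pushes CKA well above a noise baseline, while whatever such indices discount is exactly the idiosyncratic component at issue.

```python
# Illustrative only: linear CKA between hidden activations of two toy
# "networks" (random tanh projections of the same synthetic stimuli).
import numpy as np

rng = np.random.default_rng(0)

def linear_cka(X, Y):
    """Linear CKA between two (n_stimuli x n_units) activation matrices."""
    X = X - X.mean(axis=0)          # center each unit's response profile
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

stimuli = rng.normal(size=(200, 50))                        # 200 synthetic stimuli
net_a = np.tanh(0.3 * stimuli @ rng.normal(size=(50, 64)))  # toy network A
net_b = np.tanh(0.3 * stimuli @ rng.normal(size=(50, 64)))  # toy network B
noise = rng.normal(size=(200, 64))                          # unrelated baseline

# A and B share stimulus-driven structure despite different random weights,
# so their CKA sits well above the baseline against unrelated noise.
print(f"CKA(net_a, net_b) = {linear_cka(net_a, net_b):.2f}")
print(f"CKA(net_a, noise) = {linear_cka(net_a, noise):.2f}")
```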

Session Plan

Do all neural network models converge to a universal representation, or do their internal representations differ in ways that are meaningful for understanding the brain? This session will explore the universality and idiosyncrasy of representations in artificial and biological neural systems. The moderators will provide the necessary scientific background and define the core questions at the heart of the debate. These questions will be addressed through a series of short talks presenting contrasting perspectives, followed by a panel discussion and open audience Q&A.

Participants will gain a clearer understanding of what is meant by universal and idiosyncratic representations, why this distinction matters for cognitive computational neuroscience, and how it may shape future research. The session will examine whether model-specific components are necessarily misaligned with the brain and explore empirical criteria for adjudicating between the universality and idiosyncrasy hypotheses.

Community Event – The Algonauts Project 2025 Challenge

Time and Room TBA

The Algonauts Project 2025 Challenge

Alessandro Gifford, Domenic Bersch, Marie St-Laurent, Basile Pinsard, Julie Boyle, Lune Bellec, Aude Oliva, Gemma Roig, Radoslaw Cichy

The Algonauts Project, first launched in 2019, is on a mission to bring biological and machine intelligence researchers together on a common platform to exchange ideas and pioneer the intelligence frontier. Inspired by the astronauts’ exploration of space, “algonauts” explore human and artificial intelligence with state-of-the-art algorithmic tools, thus advancing both fields.

The Algonauts Project 2025 challenge focuses on predicting responses in the human brain as participants perceive complex multimodal naturalistic movies. To enable data-hungry modeling, the challenge runs on data from CNeuroMod (https://www.cneuromod.ca/), the largest suitable brain dataset available: almost 80 hours of neural recordings for each of 4 human participants. To ensure the robustness and relevance of results, the challenge features a model selection process based on out-of-distribution evaluation.
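For readers unfamiliar with this style of benchmark, the sketch below shows the generic shape of a voxelwise encoding model: ridge regression from stimulus features to brain responses, scored by per-voxel correlation on held-out data. Everything here is a synthetic placeholder, and this is not the challenge's official pipeline or the CNeuroMod data; it only illustrates the modeling problem the challenge poses.

```python
# Generic voxelwise encoding model on synthetic placeholders (NOT the
# official Algonauts pipeline or CNeuroMod data): ridge regression from
# stimulus features to responses, scored by held-out per-voxel correlation.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_vox = 800, 200, 128, 1000

X_train = rng.normal(size=(n_train, n_feat))   # e.g., movie features per timepoint
X_test = rng.normal(size=(n_test, n_feat))
true_w = rng.normal(size=(n_feat, n_vox))      # synthetic "ground truth" mapping
y_train = X_train @ true_w + 5.0 * rng.normal(size=(n_train, n_vox))
y_test = X_test @ true_w + 5.0 * rng.normal(size=(n_test, n_vox))

model = Ridge(alpha=100.0).fit(X_train, y_train)   # one regression per voxel
pred = model.predict(X_test)

# Per-voxel Pearson correlation between predicted and held-out responses.
pred_c = pred - pred.mean(axis=0)
y_c = y_test - y_test.mean(axis=0)
r = (pred_c * y_c).sum(axis=0) / (
    np.linalg.norm(pred_c, axis=0) * np.linalg.norm(y_c, axis=0))
print(f"median held-out voxel correlation: {np.median(r):.2f}")
```

The out-of-distribution evaluation mentioned above would correspond to scoring on held-out movies that differ systematically from the training material, rather than the random split used in this toy example.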

During the first part of this session, the Algonauts Project and challenge will be introduced, followed by talks from this year's challenge winners. The second part of the session will be a panel discussion on challenges in cognitive computational neuroscience, moderated by Alessandro Gifford, with the participation of Andreas Tolias, Fabian Sinz, Martin Schrimpf, Lune Bellec, and Radoslaw Cichy. Audience participation in the panel discussion is encouraged, through either in-person contributions or digital engagement.

More information at https://algonautsproject.com/

Keynote & Tutorial – Uncovering algorithms of visual cognition with multilevel computational theories

Keynote: Thursday, August 14, 1:45 – 3:45 pm, Room A0.01
Tutorial: Thursday, August 14, 4:15 – 6:00 pm, Room C1.04

Uncovering algorithms of visual cognition with multilevel computational theories


Ilker Yildirim, Mario Belledonne, Wenyan Bi (Yale University)

Abstract

There lies a great distance between incoming sensory inputs and the percepts and thoughts we experience about the world, including rich representations of objects, places, agents, and events. What are the formats of these representations? How are they inferred from sensory inputs? And how do they support the rest of cognition, including reasoning and decision-making? This K&T will introduce a reverse-engineering framework to address these questions: Multilevel Computational Theories (MCTs). At their core, MCTs hypothesize that the brain builds and manipulates ‘world models’ — i.e., structure-preserving, behaviorally efficacious representations of the physical world. By leveraging advances in probabilistic computing, computer graphics, dynamical systems, and machine learning, MCTs offer sophisticated, richly testable algorithmic formulations of visual cognition that make contact with both neural and psychological phenomena. For pedagogical reasons, the talk and the tutorial will center around a case study of modeling the visual perception of physical scenes. The talk will present a computational model of soft object perception as probabilistic inference over a simulation-based generative model of “soft-body dynamics”. The model takes as input cloth animations, computes a posterior distribution over the physical properties of the cloth (stiffness, mass) and the scene (e.g., wind), and makes a decision by matching cloths on an individual physical property. With these components, the model explains human visual processing across both psychophysical and fMRI experiments. The tutorial will cover the core components of this model in a simpler physical domain, introducing the audience to the toolkit necessary to implement such a model — probabilistic programming and physics-engine-based generative models.

Tutorial Outline

This tutorial will introduce a reverse-engineering approach, Multilevel Computational Theories (MCTs), to the CCN community, broadening the modeling toolbox beyond the more common deep learning approaches. The tutorial will have a pedagogical focus on the domain of the visual perception of physical scenes.

This tutorial will be most useful to researchers interested in building computational models; even though our focus will be on perception and visual cognition, MCTs have broader applicability beyond these domains. Instances of MCTs can make contact with quantitative psychophysics as well as neuroscience experiments (as the talk will demonstrate). Therefore, audience members using both behavioral and neural approaches will benefit from the tutorial. The most closely relevant technical background for the specific model we will cover is familiarity with Bayesian inference and programming experience in Julia or Python.

The flow of the tutorial will be as follows.

  1. Introduce the probabilistic programming package Gen with a toy inference problem. This will include coding a simple generative model, introducing Gen's key “trace” data structure, and a simple inference procedure.
  2. Introduce the PhysGen package, which provides an interface for safely implementing simulation-based generative models in Gen.
  3. Implement a world model using PhysGen: a simulation-based generative model of a ball bouncing on a surface. This generative model involves prior distributions over physical variables such as elasticity and mass, as well as over the initial position and velocity of the ball.
  4. Implement an approximate Bayesian inference procedure to condition this world model on sensory inputs using the particle filtering algorithm. This algorithm approximates a posterior distribution — i.e., a perceived world model (see the Python sketch after this list).
  5. Implement a decision-making module that transforms the perceived world model into a decision in a 2AFC match-to-sample task.
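The tutorial itself uses Gen and PhysGen; as a language-neutral preview of steps 3 and 4, the sketch below implements a toy version in plain NumPy: a one-dimensional bouncing-ball world model with an unknown elasticity, conditioned on noisy height observations by a bootstrap particle filter. The dynamics, priors, and noise model are invented for illustration and are simpler than anything in the actual tutorial.

```python
# Toy NumPy stand-in for steps 3-4 (the tutorial uses Gen/PhysGen): a 1-D
# ball bounces under gravity with unknown elasticity; a bootstrap particle
# filter conditions this world model on noisy height observations.
import numpy as np

rng = np.random.default_rng(1)
dt, g, obs_noise = 0.02, 9.8, 0.05   # step (s), gravity (m/s^2), obs. std (m)

def step(h, v, elasticity):
    """One physics step: gravity, then an energy-losing bounce at the floor."""
    v = v - g * dt
    h = h + v * dt
    bounced = h < 0.0
    v = np.where(bounced, -v * elasticity, v)
    h = np.where(bounced, 0.0, h)
    return h, v

# Generate "observed" data from a ground-truth elasticity of 0.7.
true_e, h, v, obs = 0.7, 1.0, 0.0, []
for _ in range(150):
    h, v = step(h, v, true_e)
    obs.append(h + rng.normal(scale=obs_noise))

# Bootstrap particle filter over the latent elasticity, prior Uniform(0, 1).
n = 500
e = rng.uniform(0.0, 1.0, n)           # latent physical property per particle
ph, pv = np.full(n, 1.0), np.zeros(n)  # particle states (known initial state)
for y in obs:
    ph, pv = step(ph, pv, e)                       # propagate each world model
    logw = -0.5 * ((y - ph) / obs_noise) ** 2      # Gaussian observation likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(n, size=n, p=w)               # resample by weight
    e, ph, pv = e[idx], ph[idx], pv[idx]

print(f"posterior mean elasticity: {e.mean():.2f}  (true value: {true_e})")
```

In Gen, the hand-rolled particle arrays here are replaced by traces of the generative model; the tutorial will show how the equivalent conditioning is expressed with the library's own inference tools.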

The tutorial will be implemented using Jupyter notebooks and Google Colab. A current implementation can be found here: https://github.com/CNCLgithub/Algorithms-of-the-Mind/blob/main/labs/lab-06/bouncing-balls.ipynb. We will ensure that the tutorial is self-contained and the material is broadly accessible, providing any necessary background accordingly.

Keynotes & Tutorials

Keynotes: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorials: Thursday, August 14, 4:15 - 6:00 pm, Room TBA

Language: In search of a neural code

Organizers: Jean-Remi King, Lucy Zhang, Linnea Evanson, Hubert Banville

Uncovering algorithms of visual cognition with multilevel computational theories

Organizers: Ilker Yildirim, Mario Belledonne, Wenyan Bi

Digital Brain Models for Working Memory

Organizers: Jorge Mejias & Francisco Pascoa


©2025 CCN. All rights reserved.