Keynote & Tutorial
Keynote: Thursday, August 14, 1:45 - 3:45 pm, Room TBA
Tutorial: Thursday, August 14, 4:15 - 6:00 pm, Room TBA
Uncovering algorithms of visual cognition with multilevel computational theories
Ilker Yildirim¹, Mario Belledonne¹, Wenyan Bi¹; ¹Yale University
Abstract
There lies a great distance between incoming sensory inputs and the percepts and thoughts we experience about the world, including rich representations of objects, places, agents, and events. What are the formats of these representations? How are they inferred from sensory inputs? And how do they support the rest of cognition, including reasoning and decision-making? This K&T will introduce a reverse-engineering framework to address these questions: Multilevel Computational Theories (MCTs). At their core, MCTs hypothesize that the brain builds and manipulates ‘world models’, i.e., structure-preserving, behaviorally efficacious representations of the physical world. By leveraging advances in probabilistic computing, computer graphics, dynamical systems, and machine learning, MCTs offer sophisticated, richly testable algorithmic formulations of visual cognition that illuminate both neural and psychological phenomena. For pedagogical reasons, the talk and the tutorial will center on a case study: modeling the visual perception of physical scenes. The talk will present a computational model of soft object perception as probabilistic inference over a simulation-based generative model of “soft-body dynamics”. The model takes cloth animations as input, computes a posterior distribution over the physical properties of the cloth (stiffness, mass) and the scene (e.g., wind), and makes a decision by matching cloths on an individual physical property. With these components, the model explains human visual processing in both psychophysical and fMRI experiments. The tutorial will cover the core components of this model in a simpler physical domain, introducing the audience to the toolkit needed to implement such a model: probabilistic programming and physics-engine-based generative models.
Tutorial Outline
This tutorial will introduce a reverse-engineering approach, Multilevel Computational Theories (MCTs), to the CCN community, broadening the modeling toolbox beyond the more common deep learning approaches. The tutorial has a pedagogical focus on the visual perception of physical scenes.
This tutorial will be most useful to researchers interested in building computational models; although our focus will be on perception and visual cognition, MCTs apply more broadly. Instances of MCTs can make contact with quantitative psychophysics as well as neuroscience experiments (as the talk will demonstrate), so audience members working with either behavioral or neural approaches will benefit. The most relevant technical background for the specific model we will cover is familiarity with Bayesian inference and programming experience in Julia or Python.
The flow of the tutorial will be as follows.
- Introduce the probabilistic programming package Gen with a toy inference problem. This will include coding a simple generative model, introducing Gen's key “trace” data structure, and running a simple inference procedure (see the first sketch after this list).
- Introduce the PhysGen package, which provides an interface for safely implementing simulation-based generative models in Gen.
- Implement a world model using PhysGen: a simulation-based generative model of a ball bouncing on a surface. This generative model places prior distributions over physical variables such as elasticity and mass, as well as the initial position and velocity of the ball (see the second sketch below).
- Implement an approximate Bayesian inference procedure that conditions this world model on sensory inputs using the particle filtering algorithm. The algorithm approximates a posterior distribution, i.e., a perceived world model (see the third sketch below).
- Implement a decision-making module that transforms the perceived world model into a choice in a two-alternative forced-choice (2AFC) match-to-sample task (see the final sketch below).
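To make these steps concrete, the sketches below illustrate each one in Julia. First, a minimal Gen program for a toy inference problem. The specific toy (inferring a line's slope from noisy points) is our illustrative choice rather than necessarily the tutorial's; the calls used (@gen, simulate, choicemap, importance_resampling) are part of Gen's documented API.

```julia
using Gen

# Toy generative model: a line with unknown slope, observed with noise.
@gen function line_model(xs::Vector{Float64})
    slope ~ normal(0.0, 2.0)               # prior over the latent slope
    for (i, x) in enumerate(xs)
        {(:y, i)} ~ normal(slope * x, 0.3) # noisy observation at each x
    end
end

# A trace records every random choice the model made, keyed by address.
xs = collect(0.0:0.5:2.0)
trace = Gen.simulate(line_model, (xs,))
@show trace[:slope]                        # read one choice out of the trace

# Simple inference: condition on observed ys via importance resampling.
ys = [0.1, 0.6, 1.1, 1.4, 2.1]
observations = Gen.choicemap()
for (i, y) in enumerate(ys)
    observations[(:y, i)] = y
end
(post_trace, _) = Gen.importance_resampling(line_model, (xs,), observations, 500)
@show post_trace[:slope]                   # one approximate posterior sample
```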
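Second, a sketch of the bouncing-ball world model. PhysGen's interface is specific to the tutorial materials, so this sketch substitutes a hand-rolled, assumption-labeled Euler step for the physics engine; the priors over elasticity, mass, and the ball's initial state follow the outline above.

```julia
# Hypothetical stand-in for a physics-engine step (not PhysGen's API):
# one Euler step of a ball under gravity, bouncing on a floor at height 0.
function step_dynamics(pos, vel, elasticity, dt)
    vel -= 9.81 * dt                 # gravity
    pos += vel * dt
    if pos < 0.0                     # bounce: reflect and lose energy
        pos = -pos
        vel = -elasticity * vel
    end
    return pos, vel
end

@gen function bouncing_ball(T::Int)
    # Priors over the physical variables named in the outline.
    elasticity ~ uniform(0.3, 1.0)
    mass       ~ gamma(2.0, 1.0)     # latent; unused by this toy dynamics
    init_pos   ~ uniform(1.0, 3.0)   # initial height
    init_vel   ~ normal(0.0, 1.0)    # initial vertical velocity
    p, v = init_pos, init_vel
    for t in 1:T
        p, v = step_dynamics(p, v, elasticity, 0.05)
        {(:obs, t)} ~ normal(p, 0.05)  # noisy observed height per frame
    end
end
```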
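Third, a sketch of conditioning this world model on a stream of observed heights, using Gen's built-in bootstrap particle filtering API; the observation addresses, resampling threshold, and particle count are illustrative assumptions.

```julia
# Condition the world model on observed heights with a particle filter.
function infer_physics(heights::Vector{Float64}, n_particles::Int)
    obs1 = Gen.choicemap(((:obs, 1), heights[1]))
    state = Gen.initialize_particle_filter(bouncing_ball, (1,), obs1, n_particles)
    for t in 2:length(heights)
        Gen.maybe_resample!(state, ess_threshold=n_particles / 2)
        obs = Gen.choicemap(((:obs, t), heights[t]))
        Gen.particle_filter_step!(state, (t,), (Gen.UnknownChange(),), obs)
    end
    # The particles approximate the posterior: a perceived world model.
    return Gen.sample_unweighted_traces(state, n_particles)
end
```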
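Finally, a sketch of the decision-making module. The decision rule shown, comparing posterior-mean elasticity estimates between the sample and each probe, is a hypothetical stand-in for whatever rule the tutorial implements.

```julia
using Statistics: mean

# Posterior mean of one latent from a set of particle traces.
posterior_mean(traces, addr) = mean(tr[addr] for tr in traces)

# 2AFC match-to-sample: report whichever probe's inferred elasticity
# is closer to the sample's (a hypothetical decision rule).
function match_to_sample(sample_obs, probe_a_obs, probe_b_obs; n_particles=200)
    e_s = posterior_mean(infer_physics(sample_obs, n_particles), :elasticity)
    e_a = posterior_mean(infer_physics(probe_a_obs, n_particles), :elasticity)
    e_b = posterior_mean(infer_physics(probe_b_obs, n_particles), :elasticity)
    return abs(e_a - e_s) <= abs(e_b - e_s) ? :probe_a : :probe_b
end
```

Together, infer_physics and match_to_sample mirror the perception-then-decision pipeline described in the abstract: inference produces a perceived world model, and the decision module reads a single physical property out of it.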
The tutorial will be implemented using Jupyter notebooks and Google Colab. A current implementation can be found here: https://github.com/CNCLgithub/Algorithms-of-the-Mind/blob/main/labs/lab-06/bouncing-balls.ipynb. We will ensure that the tutorial is self-contained and broadly accessible, providing any necessary background along the way.