Community Event

Wednesday, August 13, 10:00 am - 12:00 pm, Room TBA

Universality and Idiosyncrasy of Perceptual Representations

Moderators: Evelina Fedorenko1, Nikolaus Kriegeskorte2; 1Massachusetts Institute of Technology, 2Columbia University

Proponents: Mick Bonner1, Eghbal Hosseini2, Brian Cheung2; 1Johns Hopkins University, 2Massachusetts Institute of Technology

Critics: Jenelle Feather1, Alex Williams2, Tal Golan3; 1Carnegie Mellon University, 2Flatiron Institute and New York University, 3Ben-Gurion University of the Negev

Abstract

One premise of current cognitive computational modeling is that not all neural network models are equally aligned with the neural circuit under investigation. Models trained on different tasks or datasets, or employing different architectures, acquire distinct representations, and the idiosyncratic aspects of these representations (i.e., model-specific features) vary in their alignment with biological representations. This premise motivates the systematic benchmarking of diverse neural networks against the brain, with the aim of converging on increasingly brain-aligned models. However, some recent studies suggest the opposite: distinct neural networks learn the same representations. Furthermore, according to the "universal representation hypothesis," the components of representations shared across neural networks are also shared with humans, whereas idiosyncratic components, specific to individual models, are not. The implications are significant: if the universal representation hypothesis holds, model comparison is futile. This event brings together proponents and critics of the universal representation hypothesis. Together, we will consider the following questions: Are neural network representations universal, or do they also have non-shared features that reflect the architecture, objective, or learning rule? Are features not shared among artificial neural network representations necessarily misaligned with the brain? What empirical tests would adjudicate this debate? And what should cognitive computational neuroscience look like under each hypothesis?

Session Plan

Do all neural network models converge to a universal representation, or do their internal representations differ in ways that are meaningful for understanding the brain? This session will explore the universality and idiosyncrasy of representations in artificial and biological neural systems. The moderators will provide the necessary scientific background and define the core questions at the heart of the debate. These questions will be addressed through a series of short talks presenting contrasting perspectives, followed by a panel discussion and open audience Q&A.

Participants will gain a clearer understanding of what is meant by universal and idiosyncratic representations, why this distinction matters for cognitive computational neuroscience, and how it may shape future research. The session will examine whether model-specific representational components are necessarily misaligned with the brain and explore empirical criteria for adjudicating between the universality and idiosyncrasy hypotheses.