Community Event
Thursday, August 14, 4:15 – 6:00 pm, Room A2.11
Representational Alignment (Re^3-Align Collaborative Hackathon)
Brian Cheung, Dota Tianai Dong, Erin Grant, Ilia Sucholutsky, Lukas Muttenthaler, Siddharth Suresh
Abstract
Both natural and artificial intelligences form representations of the world that they use to reason, make decisions, and communicate. But how do we best compare and align these representations? There have been numerous debates around the measures used to quantify representational similarity, and as of now there is little consensus on which metric is best aligned(!) with the goal of identifying similarity between systems. This community event takes a hands-on approach to this challenge through a collaborative hackathon centered on a decades-long debate: how universal vs. variable are the representations that intelligent systems, biological and artificial, form about the world? This debate has been the target of much recent research in representation learning in machine learning, and is also receiving substantial new attention from neuroscience and cognitive science. During the hackathon, Blue Teams will work to show model universality by finding (or creating) large populations of heterogeneous models that exhibit a high degree of representational alignment. Red Teams will highlight model variability by identifying differences in representations among homogeneous populations of models that are expected to align. The event will begin with a lecture and interactive tutorial on evaluating representational similarity and conclude with a lively, open discussion.
Session Plan
This collaborative hackathon has two main goals:
- To increase the reproducibility of research on representational alignment through shared hands-on work.
- To facilitate open discussion around evaluating and controlling representational similarity, supported by participant presentations and a panel discussion.
We’ll kick off with an interactive tutorial, where organizers will provide starter code and demonstrate core techniques—extracting model activations, computing representational similarity between models, and analyzing stimuli that expose meaningful differences in representations. Participants will then form Red or Blue Teams, depending on their interests, and dive into exploring either model variability or universality. We’ll wrap up with a group discussion on key challenges and promising future directions in the study of representational alignment.
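To give a flavor of the kind of comparison the tutorial covers, here is a minimal sketch of one widely used similarity measure, linear Centered Kernel Alignment (CKA), applied to activation matrices from two hypothetical models. This is an illustration only; the organizers' starter code in the repository linked below may use different metrics and interfaces.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices.

    X, Y: arrays of shape (n_stimuli, n_features), i.e. activations of
    two models on the *same* stimuli; feature dimensions may differ.
    Returns a similarity score in [0, 1], invariant to orthogonal
    transformations and isotropic scaling of the features.
    """
    # Center each feature column so the comparison ignores mean offsets.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross-covariance, normalized by the
    # self-covariance norms of each representation.
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den

# Toy demonstration with random "activations" (placeholders for real
# model features extracted on a shared stimulus set).
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(50, 128))               # model A activations
rotation = np.linalg.qr(rng.normal(size=(128, 128)))[0]
acts_b = acts_a @ rotation                        # model B: rotated copy of A
print(linear_cka(acts_a, acts_b))                 # ≈ 1.0: CKA ignores rotations
```

A Blue Team might compute such scores across many heterogeneous models to argue for universality, while a Red Team might search for stimulus sets on which nominally similar models score low.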
You can preview the website here: https://representational-alignment.github.io/hackathon/
Quick start guide & code repo: https://github.com/representational-alignment/hackathon
Feel free to begin downloading and experimenting with the materials ahead of the event!