Contributed Talk Session: Friday, August 15, 11:00 am – 12:00 pm, Room C1.04
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall
Rethinking Representational Alignment: Linear Probing Fails to Identify the Ground-Truth Model
Itamar Avitan, Tal Golan; Ben Gurion University of the Negev
Presenter: Itamar Avitan
Linearly transforming stimulus representations of deep neural networks yields performant models of human similarity judgments. But can the predictive accuracy of such models identify genuine representational alignment? We conducted a model recovery study to test this empirically. We aligned 20 diverse pretrained models to 4.2 million human judgments from the THINGS-odd-one-out dataset, generated synthetic data conforming to the predictions of one of the models, and tested whether this model would re-emerge as the best predictor of the simulated data, as measured by linear probing. We found that even with large datasets, linear probing can systematically fail to recover ground-truth models. Our findings call for a reconsideration of the flexibility of model-human alignment metrics and the design of model comparison studies.
Keywords: Representational Alignment; Model Recovery; Deep Neural Networks; Similarity Judgments
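To make the model-recovery logic concrete, the sketch below illustrates the procedure in miniature. It is not the authors' code or data: small random feature matrices stand in for the pretrained models, choices are sampled from a softmax over dot-product pair similarities, and each linear probe is fit with Adam via optax on the triplet log-likelihood. All names, sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal model-recovery sketch (hypothetical; not the authors' pipeline).
import jax
import jax.numpy as jnp
import numpy as np
import optax

rng = np.random.default_rng(0)
n_stim, n_models, d, k = 200, 5, 64, 16

# Toy stand-ins for pretrained-model features: a shared latent plus model-specific noise.
latent = rng.standard_normal((n_stim, d))
features = [latent + 0.8 * rng.standard_normal((n_stim, d)) for _ in range(n_models)]

# Random odd-one-out triplets of distinct stimuli.
triplets = np.array([rng.choice(n_stim, size=3, replace=False) for _ in range(20_000)])

def pair_logits(Z, T):
    """Similarities of the three pairs (ij, ik, jk) in each triplet, given embeddings Z."""
    a, b, c = T[:, 0], T[:, 1], T[:, 2]
    S = Z @ Z.T
    return jnp.stack([S[a, b], S[a, c], S[b, c]], axis=1)

def nll(W, X, T, y):
    """Mean negative log-likelihood of the chosen most-similar pair under a softmax rule."""
    logp = jax.nn.log_softmax(pair_logits(X @ W, T), axis=1)
    return -jnp.mean(logp[jnp.arange(len(y)), y])

def fit_probe(X, T, y, steps=500, lr=1e-2):
    """Fit a linear probe W (d x k) with Adam on the triplet log-likelihood."""
    W = jnp.asarray(0.01 * rng.standard_normal((X.shape[1], k)))
    opt = optax.adam(lr)
    state = opt.init(W)
    grad = jax.jit(jax.grad(nll))
    for _ in range(steps):
        updates, state = opt.update(grad(W, X, T, y), state)
        W = optax.apply_updates(W, updates)
    return W

# Ground truth: model 0 viewed through a random linear probe; sample synthetic choices.
W_true = jnp.asarray(0.05 * rng.standard_normal((d, k)))
probs = np.asarray(jax.nn.softmax(
    pair_logits(jnp.asarray(features[0]) @ W_true, jnp.asarray(triplets)), axis=1))
choices = (probs.cumsum(axis=1) > rng.random((len(probs), 1))).argmax(axis=1)

# Fit a probe per candidate model on a training split; compare held-out accuracy.
n_train = 15_000
T_tr, y_tr = jnp.asarray(triplets[:n_train]), jnp.asarray(choices[:n_train])
T_te, y_te = jnp.asarray(triplets[n_train:]), jnp.asarray(choices[n_train:])
for m, F in enumerate(features):
    X = jnp.asarray(F)
    W = fit_probe(X, T_tr, y_tr)
    acc = float(jnp.mean(jnp.argmax(pair_logits(X @ W, T_te), axis=1) == y_te))
    print(f"model {m}: held-out accuracy {acc:.3f}" + ("  <- generator" if m == 0 else ""))
```

In the study itself, the candidate features come from 20 pretrained networks and the synthetic choices conform to the predictions of a model already aligned to the 4.2 million THINGS-odd-one-out judgments; recovery succeeds only if that generating model re-emerges as the best held-out predictor under linear probing.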
Topic Area: Visual Processing & Computational Vision
Extended Abstract: Full Text PDF