Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E-Hall
Probabilistic representations fail to emerge in task-optimized neural networks
Ishan Kalburge¹, Máté Lengyel¹; ¹University of Cambridge
Presenter: Ishan Kalburge
While mounting evidence indicates that human decision-making follows Bayesian principles, the underlying neural computations remain unknown. Recent work proposes that probabilistic representations arise naturally in neural networks trained with non-probabilistic objectives (Orhan & Ma). However, prior analyses did not explicitly examine whether the neural code merely re-represents its inputs or transforms them in a useful way, as assessed by three criteria for a probabilistic representation: generalization, invariance, and representational simplicity (Walker et al.; Pohl et al.). Using a novel probing-based approach, we show that training feed-forward networks to perform cue combination and coordinate transformation without probabilistic objectives leads to Bayesian posteriors being decodable from their hidden-layer activities. However, we also show that these networks fail all three criteria: they do not generalize out-of-sample, compress their inputs, or develop easily decodable representations. It therefore remains an open question under what conditions truly probabilistic representations emerge in neural networks.
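To make the probing idea concrete, the following is a minimal sketch under toy assumptions of my own (Gaussian cue combination with a flat prior, a small MLP trained on a non-probabilistic squared-error objective, and ridge-regression probes); it is not the authors' actual architecture or analysis pipeline. It also illustrates the abstract's caution: a probe can recover posterior quantities even when the hidden layer merely re-represents its inputs.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Latent stimulus and two noisy cues with trial-varying reliabilities.
s = rng.uniform(-1.0, 1.0, size=n)
sigma1 = rng.choice([0.1, 0.2, 0.4], size=n)
sigma2 = rng.choice([0.1, 0.2, 0.4], size=n)
x1 = s + sigma1 * rng.standard_normal(n)
x2 = s + sigma2 * rng.standard_normal(n)
X = np.column_stack([x1, x2, sigma1, sigma2])

# Analytic Bayesian posterior for Gaussian cues under a flat prior:
# precision-weighted mean and combined variance.
w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
post_mean = (w1 * x1 + w2 * x2) / (w1 + w2)
post_var = 1.0 / (w1 + w2)

# Train a feed-forward network with a NON-probabilistic objective:
# regress the point estimate s under squared error.
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64,), activation="relu",
                   max_iter=500, random_state=0)
net.fit(X[idx_tr], s[idx_tr])

# Read out the hidden-layer activations by hand from the fitted weights.
H = np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

# Linear probes: can the posterior mean and (log) variance be decoded from
# the hidden code? Note that high probe R^2 alone does NOT establish a
# probabilistic representation: here the posterior variance is a function
# of (sigma1, sigma2), which the network receives as inputs, so the probe
# can succeed even if the hidden layer merely re-represents its inputs.
for name, target in [("posterior mean", post_mean),
                     ("log posterior variance", np.log(post_var))]:
    probe = Ridge(alpha=1.0).fit(H[idx_tr], target[idx_tr])
    print(f"{name}: held-out probe R^2 = "
          f"{probe.score(H[idx_te], target[idx_te]):.3f}")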
Topic Area: Predictive Processing & Cognitive Control
Extended Abstract: Full Text PDF