Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall
Neural Dimensionality and Temporal Dynamics of Visual Representations
Zirui Chen1, Leyla Isik1, Michael Bonner1; 1Johns Hopkins University
Presenter: Zirui Chen
Understanding the temporal dynamics of visual representations in the brain is a fundamental challenge. While research has shown that neural time series data contain rich information from which many visual features can be decoded, less is known about how the stimulus representation itself evolves over time and how these dynamics relate to feature decodability. Here, we investigated these questions using EEG recordings from subjects viewing everyday objects. We found that the dimensionality of stimulus-related variance rapidly increases to nearly full rank after stimulus onset and is sustained for several hundred milliseconds. During this time, the underlying representations oscillate, with every latent dimension undergoing multiple sign flips. Interestingly, the time course of feature decodability closely corresponds to the window of high dimensionality, and temporal-generalization patterns of above- and below-chance decoding accuracies correspond to sign flips of the representational dimensions. Furthermore, we found that behavioral features and neural network representations each capture only a subset of the neural dimensionality, suggesting that significant portions of neural activity represent information not accounted for by current measures. Together, our findings show that natural images elicit rapidly fluctuating high-dimensional representations, encoding rich sensory information that has yet to be explained by state-of-the-art behavioral and computational models.
Topic Area: Visual Processing & Computational Vision
Extended Abstract: Full Text PDF
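The abstract does not specify which dimensionality estimator the authors used; a common choice for tracking the dimensionality of stimulus-related variance over time is the participation ratio of the covariance eigenvalues. The sketch below is an illustrative, hypothetical reconstruction on simulated data (all array shapes and names are assumptions, not the authors' pipeline): it computes an effective-dimensionality time course from a stimuli × channels × timepoints EEG-like array.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of X (stimuli x channels), defined as
    (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).
    Ranges from 1 (one dominant dimension) to the number of channels."""
    X = X - X.mean(axis=0, keepdims=True)          # center across stimuli
    ev = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    ev = np.clip(ev, 0.0, None)                    # guard tiny negative eigenvalues
    return ev.sum() ** 2 / (ev ** 2).sum()

# Simulated stand-in for stimulus-evoked EEG responses:
# 100 stimuli x 64 channels x 50 timepoints (purely illustrative sizes).
rng = np.random.default_rng(0)
n_stim, n_chan, n_time = 100, 64, 50
data = rng.standard_normal((n_stim, n_chan, n_time))

# Dimensionality of the stimulus-by-channel response pattern at each timepoint.
dim_over_time = np.array(
    [participation_ratio(data[:, :, t]) for t in range(n_time)]
)
```

On real data, a rise of this curve toward the channel count after stimulus onset would correspond to the "nearly full rank" observation described in the abstract; cross-validated variants are typically preferred to avoid noise inflating the estimate.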