Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall
Perceptogram: Interpreting Visual Percepts from EEG
Teng Fei1, Abhinav Uppal1, Ian L. Jackson1, Srinivas Ravishankar2, David Wang1, Virginia R. de Sa3; 1University of California, San Diego, 2International Business Machines, 3Halicioglu Data Science Institute, University of California, San Diego
Presenter: Virginia R. de Sa
Recent advances in EEG-based visual decoding use diffusion models to generate realistic images from neural activity. These methods typically project EEG signals into latent spaces, most commonly that of Contrastive Language–Image Pretraining (CLIP), which define the visuosemantic features used for subsequent image reconstruction. However, prior methods rely on deep, opaque models and overlook the neural origins of the decoded information. Here, we introduce Perceptogram, a unified, interpretable framework that uses paired linear mappings between EEG signals and CLIP latents, leveraging CLIP's inherent structure. Perceptogram achieves state-of-the-art reconstruction quality and generates latent-filtered EEG maps that isolate the neural activity relevant to specific visual attributes. These maps reveal a clear spatiotemporal organization: at $\approx 100$ ms post-stimulus, lateral posterior negativity encodes smooth textures and blue hues, while medial negativity captures textured images, red hues, and food semantics; at $\approx 180$ ms, lateral negativity signals animate objects. By identifying these distinct neural signatures, Perceptogram transparently delineates how visual features, from basic textures and colors to high-level object categories, are temporally and spatially represented in the brain.
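To make the idea of paired linear mappings and latent-filtered EEG maps concrete, the sketch below shows one plausible way to implement them with ridge regression. This is an illustrative assumption, not the authors' code: the arrays `eeg` and `clip_latents`, the dimensions, and the regularization strength are all placeholders.

```python
# Illustrative sketch of paired linear EEG <-> CLIP mappings (assumed setup,
# not the Perceptogram implementation).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_channels, n_times, latent_dim = 200, 64, 100, 512
eeg = rng.standard_normal((n_trials, n_channels, n_times))   # placeholder EEG epochs
clip_latents = rng.standard_normal((n_trials, latent_dim))   # placeholder CLIP embeddings

X = eeg.reshape(n_trials, -1)                                 # flatten channels x time

# Decoder: EEG -> CLIP latents (linear, so the weights stay interpretable)
decoder = Ridge(alpha=1e3).fit(X, clip_latents)

# Encoder: CLIP latents -> EEG, the paired inverse mapping
encoder = Ridge(alpha=1e3).fit(clip_latents, X)

# A "latent-filtered" EEG map for one latent direction: project that direction
# back into sensor space and reshape into a channels x time topography.
direction = np.zeros(latent_dim)
direction[0] = 1.0                                            # e.g. a hypothetical color- or texture-related axis
eeg_map = encoder.predict(direction[None, :]).reshape(n_channels, n_times)
print(eeg_map.shape)                                          # (64, 100) spatiotemporal map for that feature
```

In this toy setup the decoder would feed a pretrained diffusion model for reconstruction, while the encoder's rows provide the spatiotemporal maps that the abstract interprets at specific post-stimulus latencies.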
Topic Area: Visual Processing & Computational Vision
Extended Abstract: Full Text PDF