Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall
Cross-Subject Brain Decoding for Naturalistic Movie Reconstruction
Myeonggyo Jeong1; 1Sungkyunkwan University
Presenter: Myeonggyo Jeong
Understanding how the brain processes dynamic visual stimuli remains a central challenge in neuroscience. Although recent AI-based methods have succeeded in reconstructing static images from fMRI data, decoding continuous movie scenes adds a further layer of complexity: spatiotemporal brain activity is represented differently across individuals. Here, we propose a multi-subject fMRI decoding framework that combines inter-subject functional alignment, which uncovers neural representations shared across participants, with subject-specific tokens that capture the idiosyncrasies of each individual's functional dynamics during fMRI representation learning. By integrating these two complementary techniques, our method achieves both robust cross-subject generalization and person-optimized modeling while requiring only minimal fine-tuning. We further employ a whole-brain Transformer to map fMRI signals to CLIP image-text embeddings, producing an enriched brain-to-video input representation for subsequent video generation. Finally, we use AnimateDiff and FreeInit, two recent algorithms, to maximize temporal coherence across reconstructed frames. Advancing fMRI movie decoding holds promise as a quantitative means of scrutinizing the brain dynamics underlying naturalistic visual experiences.
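The core idea of conditioning a shared whole-brain Transformer on subject identity can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the parcel counts, embedding widths, single attention layer, and all parameter names are illustrative assumptions (a real model would be trained, use hundreds of parcels, and target CLIP's actual 512- or 768-dimensional space).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (assumptions, far smaller than a real model)
N_ROIS, D_MODEL, D_CLIP, N_SUBJECTS = 8, 16, 32, 3

# "Learned" parameters, randomly initialised for the sketch
roi_embed   = rng.normal(size=(N_ROIS, D_MODEL))      # one embedding per brain parcel
subj_tokens = rng.normal(size=(N_SUBJECTS, D_MODEL))  # subject-specific tokens
W_q = rng.normal(size=(D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
W_k = rng.normal(size=(D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
W_v = rng.normal(size=(D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
W_out = rng.normal(size=(D_MODEL, D_CLIP)) / np.sqrt(D_MODEL)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encode(fmri_rois, subject_id):
    """Map one time point of parcel-wise fMRI activity to a CLIP-sized embedding."""
    # Tokenise: scale each parcel's embedding by its measured activation
    tokens = fmri_rois[:, None] * roi_embed                  # (N_ROIS, D_MODEL)
    # Prepend the subject token so attention can condition on identity
    x = np.vstack([subj_tokens[subject_id:subject_id + 1], tokens])
    # One single-head self-attention sub-layer (no MLP/LayerNorm, for brevity)
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    attn = softmax(q @ k.T / np.sqrt(D_MODEL))
    x = x + attn @ v
    # Pool over tokens and project into the CLIP embedding space
    z = x.mean(axis=0) @ W_out
    return z / np.linalg.norm(z)                             # unit norm, as CLIP embeddings are

fmri = rng.normal(size=N_ROIS)      # fake single-TR parcel activations
z0 = encode(fmri, subject_id=0)
z1 = encode(fmri, subject_id=1)
print(z0.shape)                     # (32,)
print(np.allclose(z0, z1))          # same input, different subject token -> differing embeddings
```

Training would then pull `encode(fmri, s)` toward the CLIP embedding of the corresponding movie frame (e.g. with a cosine or contrastive loss); only the small `subj_tokens` table needs updating for a new participant, which is what makes minimal fine-tuning plausible.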
Topic Area: Visual Processing & Computational Vision
Extended Abstract: Full Text PDF