Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall
Voxel-wise Encoding of Visual and Social Meaning During Silent Movie Viewing in Deaf and Hearing Participants
Maria Zimmermann1, Marcin Szwed, Leyla Isik1, Marina Bedny1; 1Johns Hopkins University
Presenter: Maria Zimmermann
In deaf individuals, higher-order auditory regions such as the superior temporal cortex are thought to be repurposed for visual processing. In a previous study, we showed that these regions are recruited for processing rich visual meaning during silent films (Zimmermann et al., 2024). To investigate which specific features drive this reorganization, we applied a voxel-wise encoding model to fMRI data from deaf and hearing participants as they watched a silent animated movie. The model included a range of visual, social, and affective features and significantly explained variance across the whole brain in both groups, revealing patterns consistent with prior findings. In regions that showed differential intersubject synchronization between groups, prediction performance (R²) was higher in deaf participants, particularly for social interaction features in the right superior temporal sulcus (STS) and Theory of Mind features in the right posterior STS. These findings suggest that reorganization of temporal cortex in deaf individuals may reflect an expansion of nearby visual and social feature representations into formerly auditory regions.
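A voxel-wise encoding model of the kind described above regresses stimulus features onto each voxel's fMRI time course and scores held-out prediction accuracy (R²) per voxel. The following is a minimal generic sketch of that idea on simulated data, not the authors' actual pipeline; all array sizes, the ridge penalty, and the train/test split are illustrative assumptions.

```python
# Generic voxel-wise encoding sketch (simulated data; not the authors' pipeline).
# Fit ridge regression from movie features to voxel responses, then compute
# held-out R^2 separately for each voxel.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_trs, n_feats, n_voxels = 200, 5, 10            # hypothetical sizes
X = rng.standard_normal((n_trs, n_feats))         # stimulus features (e.g. visual/social/affective)
W = rng.standard_normal((n_feats, n_voxels))      # simulated "true" weights
Y = X @ W + 0.5 * rng.standard_normal((n_trs, n_voxels))  # simulated voxel time courses

train, test = slice(0, 150), slice(150, 200)      # simple temporal split
model = Ridge(alpha=1.0).fit(X[train], Y[train])  # one multi-output ridge fit
Y_pred = model.predict(X[test])

# Per-voxel held-out prediction performance (R^2), the map compared across groups.
r2 = np.array([r2_score(Y[test, v], Y_pred[:, v]) for v in range(n_voxels)])
print(r2.round(2))
```

In practice such analyses add steps this sketch omits (hemodynamic lag modeling, per-voxel regularization tuning, cross-validation), but the per-voxel R² map is the quantity being compared between deaf and hearing groups.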
Topic Area: Language & Communication
Extended Abstract: Full Text PDF