
Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall

Disentangling redundant and synergistic interactions in the alignment between auditory brains and machines

Christian Ferreyra1, Marie Plegat1, Giorgio Marinato1, Maria de Araújo Vitória, Michele Esposito2, Elia Formisano2, Thierry Artières3, Bruno L. Giordano4; 1Université d'Aix-Marseille, 2Maastricht University, 3Aix Marseille University, 4CNRS

Presenter: Christian Ferreyra

Artificial neural networks (ANNs) have become increasingly useful for modeling how the brain builds representations of the natural world, yet the nature of their representational alignment with dynamic brain activity remains underexplored. Here, we introduce an information-theoretic framework that uses partial information decomposition (PID) to decompose representational geometries into redundant and synergistic components. Combining magnetoencephalography (MEG) recordings from participants listening to natural sounds with two sound-processing ANNs that produce categorical (CatDNN) and continuous (SemDNN) semantic outputs, we analyze time-varying brain-model alignment for two optimized stimulus sets. For low-agreement stimulus sets, where mutual information between the models is minimized, SemDNN shows higher mutual information with brain activity. PID further reveals greater redundancy and synergy for SemDNN, suggesting sustained temporal integration of intermediate semantic features that could afford a more accurate readout of the auditory environment. These results highlight the value of representational decomposition for detailing the shared and complementary components of the alignment between brains and ANNs.
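
As a rough illustration of the bookkeeping behind a two-source PID of the kind the abstract invokes, the sketch below decomposes the joint mutual information I(Y; X1, X2) into redundancy, unique, and synergy terms using the minimal-mutual-information (MMI) redundancy measure. The abstract does not specify which redundancy measure or estimator the authors use; the variable names (X1/X2 for discretised model representations, Y for a brain response) and the MMI choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-source partial information decomposition (PID)
# with the MMI redundancy measure. Illustrative only: X1/X2 stand for two
# discretised model representations and Y for a (discretised) brain response.
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability array, ignoring zero cells."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy[x, y]."""
    return entropy(p_xy.sum(axis=1)) + entropy(p_xy.sum(axis=0)) - entropy(p_xy.ravel())

def pid_mmi(p):
    """Decompose I(Y; X1, X2) given the joint distribution p[x1, x2, y]."""
    i1 = mutual_info(p.sum(axis=1))               # I(Y; X1)
    i2 = mutual_info(p.sum(axis=0))               # I(Y; X2)
    i12 = mutual_info(p.reshape(-1, p.shape[2]))  # I(Y; X1, X2)
    red = min(i1, i2)                             # MMI redundancy
    uniq1, uniq2 = i1 - red, i2 - red             # unique information per source
    syn = i12 - uniq1 - uniq2 - red               # synergy is what remains
    return {"redundancy": red, "unique_1": uniq1, "unique_2": uniq2, "synergy": syn}

# Toy check: Y = XOR of two independent binary sources is purely synergistic,
# so redundancy and unique terms should be ~0 and synergy ~1 bit.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1, x2, x1 ^ x2] = 0.25
print(pid_mmi(p))
```

In this framing, a large redundancy term would indicate that the two sources (e.g., two model representations, or a model and the brain at two time points) carry overlapping information about the target, while a large synergy term indicates information available only from their combination.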

Topic Area: Visual Processing & Computational Vision

Extended Abstract: Full Text PDF