Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall
Temporal dynamics of natural sounds representation in the human brain
Marie Plegat1, Giorgio Marinato1, Christian Ferreyra1, Maria Araújo Vitória, Michele Esposito2, Daniele Schön, Elia Formisano2, Bruno L. Giordano3; 1Université d'Aix-Marseille, 2Maastricht University, 3CNRS
Presenter: Marie Plegat
Acoustic and semantic representations involved in the temporal dynamics of the cerebral processing of natural sounds are often studied separately. As a consequence, we lack direct knowledge of how the human brain transforms complex acoustic waveforms into semantic representations of the acoustic environment. Here, we aimed to elucidate this process by predicting magnetoencephalographic (MEG) responses to natural sounds using acoustic and semantic (text-based) models. Critically, we also considered two recently developed sound-processing deep neural networks (DNNs) that differ only in their loss function: CatDNN, which learns sound-event categories, and SemDNN, which learns continuous semantic embeddings (Word2Vec). We observe that the DNNs predict the dynamic MEG response better than the acoustic and text-based semantic models, except at long latencies (800–1000 ms), where higher-level acoustic features (auditory dimensions) appear to dominate. Focusing on the DNNs, we observe a potential switch from initial protoacoustic/categorical semantic representations (CatDNN, 250 ms) to more refined continuous semantic representations (SemDNN, 500–800 ms). Overall, our findings suggest limitations of text-based models of the cerebral representations of natural sounds and give a temporally resolved description of the cerebral dynamics of the acoustic-to-semantic transformation.
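As a rough illustration of the kind of analysis the abstract describes (cross-validated encoding models that predict MEG responses from stimulus features, evaluated per time point), here is a minimal Python sketch on synthetic data. The array shapes, the ridge penalty grid, and the Pearson correlation metric are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of a time-resolved MEG encoding analysis on synthetic data.
# Shapes, penalty grid, and metric are assumptions, not the authors' code.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

n_sounds, n_features, n_times = 200, 128, 120     # stimuli x model dims x MEG samples
X = rng.standard_normal((n_sounds, n_features))   # model embeddings (e.g., CatDNN or SemDNN)
Y = rng.standard_normal((n_sounds, n_times))      # MEG response at one sensor/source

def encoding_accuracy(X, Y, n_splits=5, alphas=np.logspace(-2, 4, 13)):
    """Cross-validated prediction accuracy (Pearson r) at each time point."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    pred = np.zeros_like(Y)
    for train, test in kf.split(X):
        # Ridge regression with an inner-CV choice of penalty; predicts all
        # time points jointly from the stimulus features.
        model = RidgeCV(alphas=alphas).fit(X[train], Y[train])
        pred[test] = model.predict(X[test])
    # Correlate predicted and observed responses separately per time point,
    # yielding a prediction-accuracy time course.
    return np.array([np.corrcoef(pred[:, t], Y[:, t])[0, 1]
                     for t in range(Y.shape[1])])

accuracy = encoding_accuracy(X, Y)
print(accuracy.max(), accuracy.argmax())  # peak accuracy and its latency index
```

Comparing such accuracy time courses across feature sets (acoustic, text-based semantic, CatDNN, SemDNN) is one way to obtain the kind of latency-specific model comparison the abstract reports.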
Topic Area: Language & Communication
Extended Abstract: Full Text PDF