Contributed Talk Session: Thursday, August 14, 11:00 am – 12:00 pm, Room C1.04
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall
Neural computations underlying human social evaluations from visual stimuli
Manasi Malik1, Minjae J. Kim1, Tianmin Shu1, Shari Liu1, Leyla Isik1; 1Johns Hopkins University
Presenter: Manasi Malik
Humans easily make social evaluations from visual scenes, but the computational mechanisms in the brain that support this ability remain unknown. Here, we test two hypotheses raised by prior work: one proposes that people recognize and evaluate social interactions by inverting a generative model of the world and reasoning about others' mental states; the other suggests that this process relies on bottom-up visual perception without explicit mental state inference. In this preregistered study, we collected fMRI responses from participants watching videos of social interactions and compared these neural responses to computational models that instantiate the two theories: a generative inverse planning model (SIMPLE) and a relational bottom-up visual model (SocialGNN). Using representational similarity analysis, we find that perceptual social processing regions, including regions in pSTS and LOTC, show representations significantly similar to SocialGNN's, even after controlling for SIMPLE and low-level motion features. Further, a non-relational visual control model failed to explain neural responses in these regions. SIMPLE also explained neural responses in similar regions, but its effects were weaker and largely accounted for by SocialGNN. These findings suggest that regions in pSTS and LOTC may support relational bottom-up computations during social interaction recognition.
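To make the analysis concrete, below is a minimal sketch of representational similarity analysis with a partial-correlation control, in the spirit of the model-brain comparison described above. This is not the authors' code: the variable names (roi_betas, socialgnn_features, simple_features), the placeholder random data, and the use of a simple partial Spearman correlation to "control for" a competing model are illustrative assumptions; a real analysis would use the actual ROI responses and model embeddings, cross-validated distances, and noise-ceiling estimates.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    # One row per video condition; correlation distance between condition patterns.
    return pdist(responses, metric="correlation")

def partial_spearman(x, y, z):
    # Partial Spearman: Pearson correlation of rank-transformed x and y
    # after regressing out the ranks of the control RDM z.
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    rx, ry, rz = rank(x), rank(y), rank(z)
    Z = np.column_stack([rz, np.ones_like(rz)])
    resid = lambda a: a - Z @ np.linalg.lstsq(Z, a, rcond=None)[0]
    return np.corrcoef(resid(rx), resid(ry))[0, 1]

# Placeholder data standing in for real ROI betas and model features (hypothetical).
rng = np.random.default_rng(0)
n_videos = 60
roi_betas = rng.normal(size=(n_videos, 500))          # e.g. pSTS voxel responses per video
socialgnn_features = rng.normal(size=(n_videos, 64))  # SocialGNN embeddings per video
simple_features = rng.normal(size=(n_videos, 32))     # SIMPLE model features per video

neural_rdm = rdm(roi_betas)
gnn_rdm = rdm(socialgnn_features)
simple_rdm = rdm(simple_features)

raw_rho = spearmanr(neural_rdm, gnn_rdm)[0]                         # model-brain similarity
controlled_rho = partial_spearman(neural_rdm, gnn_rdm, simple_rdm)  # controlling for SIMPLE
print(f"raw rho = {raw_rho:.3f}, partial rho = {controlled_rho:.3f}")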
Topic Area: Reward, Value & Social Decision Making
Extended Abstract: Full Text PDF