
Cognitive and Neural Mechanisms of Social Behavior

Contributed Talk Session: Thursday, August 14, 11:00 am – 12:00 pm, Room C1.04

Arousal dynamics predict transitions in engagement state

Talk 1, 11:00 am – Philippa A. Johnson1, Sander Nieuwenhuis, Anne Urai1; 1Leiden University

Presenter: Philippa A. Johnson

When performing a task for a prolonged period, animals switch between engaged and disengaged behavioural states. What neural and physiological processes trigger these state transitions? We estimated the engagement state of mice using a hidden Markov model of response times and found that intermediate arousal was associated with greater task engagement. Additionally, we show that changes in arousal predict subsequent changes in behavioural state. To explain this, we propose a double-well model in which arousal causes behavioural state transitions by reshaping the attractor landscape of population neural activity. These results highlight a possible mechanism for arousal-related changes in behaviour and suggest the presence of early warning signals for behavioural state switches.
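The double-well mechanism described above can be sketched as a one-dimensional Langevin simulation in which an arousal term tilts the potential. This is an illustrative toy, not the authors' fitted model: the potential's form, the `noise` and `dt` values, and the way arousal enters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_double_well(arousal, n_steps=20_000, dt=0.01, noise=0.6):
    """Euler-Maruyama simulation of dx = -V'(x) dt + noise dW, with
    V(x) = x**4/4 - x**2/2 + arousal*x.

    The arousal term tilts the double-well potential, biasing which
    attractor (engaged vs. disengaged state) the system occupies.
    All parameter values are illustrative, not taken from the paper.
    """
    x = np.empty(n_steps)
    x[0] = 1.0
    for t in range(1, n_steps):
        drift = -(x[t - 1] ** 3 - x[t - 1] + arousal)
        x[t] = x[t - 1] + drift * dt + noise * np.sqrt(dt) * rng.normal()
    return x

traj_flat = simulate_double_well(arousal=0.0)    # two wells at x = -1, +1
traj_tilted = simulate_double_well(arousal=0.6)  # tilt removes the x = +1 well
```

With no tilt, noise drives occasional hops between the two attractors (spontaneous state switches); a sufficiently strong tilt eliminates one well entirely, forcing a transition, which is the flavour of mechanism the abstract proposes for arousal.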

Full Text PDF

Neural Representation of Social Relationship Graphs through Multidimensional Modeling of Dynamic Social Interactions

Talk 2, 11:10 am – Dasom Kwon1, Eshin Jolly, Luke Chang2, Won Mok Shim3; 1Sungkyunkwan University (SKKU), 2Dartmouth College, 3Sungkyunkwan University (SKKU)

Presenter: Dasom Kwon

Social interactions continuously evolve, shaping our understanding of interpersonal relationships. Yet, how does the human brain construct relational knowledge from such dynamics? Prior research has primarily relied on unidimensional co-occurrence metrics, failing to capture the complexity of real-world social dynamics. Here, we introduce a multidimensional modeling framework characterizing dynamic social interactions as valence-weighted graphs. Using fMRI data collected during movie-viewing and subsequent relationship rating tasks, we show that distributed brain regions track dynamic interactions and represent social relationship graphs. These representations were preserved across tasks, with higher dimensionality observed in the medial prefrontal cortex (mPFC) and lower dimensionality in the posterior superior temporal sulcus (pSTS). These findings bridge online social perception and structured relational knowledge, elucidating how the brain organizes dynamic social interactions into multi-layered interpersonal relationship graphs.
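A valence-weighted relationship graph of the kind described can be illustrated with a minimal sketch: aggregate signed interaction events into a symmetric adjacency matrix whose entries are mean pairwise valence. The characters, events, and valence values below are invented for illustration; the paper's actual movie annotations and modeling pipeline are not reproduced here.

```python
import numpy as np

# Hypothetical interaction stream: (character_a, character_b, valence in [-1, 1]).
events = [
    ("amy", "ben", +0.8), ("amy", "ben", +0.5),
    ("ben", "cal", -0.6), ("amy", "cal", -0.2),
    ("ben", "cal", -0.4),
]

characters = sorted({c for a, b, _ in events for c in (a, b)})
idx = {c: i for i, c in enumerate(characters)}
n = len(characters)

# Valence-weighted graph: edge weight = mean signed valence of a pair's
# interactions, stored symmetrically.
weight_sum = np.zeros((n, n))
count = np.zeros((n, n))
for a, b, v in events:
    i, j = idx[a], idx[b]
    weight_sum[i, j] += v; weight_sum[j, i] += v
    count[i, j] += 1; count[j, i] += 1

graph = np.divide(weight_sum, count, out=np.zeros((n, n)), where=count > 0)
```

Unlike a unidimensional co-occurrence count, the signed weights preserve whether interactions were positive or negative, which is the distinction the multidimensional framework is built around.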

Full Text PDF

A computational model of reward learning and habits on social media

Talk 3, 11:20 am – Georgia Turner1, Lukas J. Gunschera, Shashanka Subrahmanya2, Aadesh Salecha3, Johannes C. Eichstaedt, Stefano Palminteri4, Amy Orben; 1University of Cambridge, 2University of Southern California, 3Stanford University, 4Ecole Normale Supérieure – PSL

Presenter: Georgia Turner

Social media have fundamentally transformed how we live and communicate, yet methods for studying how our cognitive systems interact with technology platforms remain very limited. Computational modelling offers a new avenue for uncovering the fine-grained cognitive processes driving social media behaviour. Here, we develop a novel computational model of real-world social media posting data, adapted from the animal reward learning literature. We fit seven models to a Twitter dataset (n=2,696 users), including a preregistered replication, and show that a hybrid goal-directed and habitual reward-seeking process underlies social media posting behaviour. More frequent posters show signs of more habitual behaviour. Our model paves the way for large-scale investigation of the cross-species cognitive processes motivating social media behaviours and their downstream impacts on individuals and society.
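One common way to formalize a hybrid goal-directed/habitual process is to mix a delta-rule reward value with a decaying action-repetition trace. The sketch below illustrates that general scheme on a simulated post/wait choice; the update rules, parameter values, and reward stream are assumptions for illustration, not the paper's seven fitted models.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_posting_model(rewards, alpha_q=0.3, alpha_h=0.1, w=0.7, beta=3.0):
    """Hybrid reward-seeking agent: action propensity mixes a reward-driven
    value (goal-directed) with a recency-of-action trace (habitual).

    Actions: 0 = wait, 1 = post. rewards[t] is the hypothetical social
    reward (e.g. likes) received if the agent posts at step t.
    """
    q = np.zeros(2)  # goal-directed values
    h = np.zeros(2)  # habit strengths
    choices = []
    for r in rewards:
        net = w * q + (1 - w) * h
        p_post = 1.0 / (1.0 + np.exp(-beta * (net[1] - net[0])))
        a = int(rng.random() < p_post)
        choices.append(a)
        if a == 1:
            q[1] += alpha_q * (r - q[1])  # reward prediction error update
        h = (1 - alpha_h) * h             # habits decay...
        h[a] += alpha_h                   # ...and strengthen with repetition
    return np.array(choices)

choices = hybrid_posting_model(rewards=np.ones(200))  # constant reward for posting
```

Lowering `w` shifts control toward the habit trace, which keeps producing the frequent action even when reward changes; that is the qualitative signature of the more habitual behaviour reported in frequent posters.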

Full Text PDF

Hierarchical systems in the default mode network when reasoning about self and other mental states

Talk 4, 11:30 am – Isaac Ray Christian1, Samuel Nastase1; 1Princeton University

Presenter: Isaac Ray Christian

Humans spend time contemplating the minds of others. But this ability is not limited to external agents – we also turn the lens for reading minds inward, reflecting on our own thoughts, emotions, and sense of self. Some processes involved in reasoning about minds may rely on shared mechanisms, while others may be specific to the agent under consideration. Using fMRI and multi-voxel pattern analysis, we found that ventral regions in the DMN selectively decoded mental state inference patterns for self, but not other, whereas a region in posterior cingulate cortex differentiated the target of mental state inference. Using a cross-classification analysis, we also found patterns in the dorsomedial prefrontal cortex, ventromedial prefrontal cortex, and right temporoparietal junction were sensitive to mental state reasoning in general, regardless of the target agent. These findings highlight one process reflecting reasoning specific to the agent and another reflecting the reasoning process itself.
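The cross-classification logic used here (train a decoder on self-directed trials, test on other-directed trials; above-chance transfer implies an agent-general code) can be illustrated on synthetic data with a hand-rolled nearest-centroid decoder. Voxel counts, trial counts, and the shared signal below are all simulated assumptions, not the study's data or classifier.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_patterns(n_trials, n_voxels, signal):
    """Synthetic multi-voxel patterns for two mental-state conditions;
    `signal` is a condition-specific pattern shared across trials."""
    labels = np.repeat([0, 1], n_trials // 2)
    x = rng.normal(size=(n_trials, n_voxels))
    x[labels == 1] += signal
    return x, labels

n_voxels = 50
shared_signal = rng.normal(size=n_voxels)  # assumed agent-general state code

x_self, y_self = make_patterns(100, n_voxels, shared_signal)
x_other, y_other = make_patterns(100, n_voxels, shared_signal)

# Train on "self" trials: one centroid per condition.
centroids = np.stack([x_self[y_self == c].mean(axis=0) for c in (0, 1)])

# Test on "other" trials: classify by nearest centroid.
dists = np.linalg.norm(x_other[:, None, :] - centroids[None], axis=2)
cross_accuracy = (dists.argmin(axis=1) == y_other).mean()
```

If the two target agents instead used unrelated pattern codes (a different `signal` per agent), this transfer accuracy would fall to chance, which is how the analysis separates agent-general from agent-specific representations.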

Full Text PDF

Neural computations underlying human social evaluations from visual stimuli

Talk 5, 11:40 am – Manasi Malik1, Minjae J. Kim1, Tianmin Shu1, Shari Liu, Leyla Isik1; 1Johns Hopkins University

Presenter: Manasi Malik

Humans easily make social evaluations from visual scenes, but the computational mechanisms in the brain that support this ability remain unknown. Here, we test two hypotheses raised by prior work: one proposes that people recognize and evaluate social interactions by inverting a generative model of the world and reasoning about others’ mental states; the other suggests that this process relies on bottom-up visual perception without explicit mental state inference. In this preregistered study, we collected fMRI responses from participants watching videos of social interactions and compared these neural responses to computational models that instantiate these different theories: a generative inverse planning model (SIMPLE) and a relational bottom-up visual model (SocialGNN). Using representational similarity analysis, we find that perceptual social processing regions — such as regions in pSTS and LOTC — exhibit representational geometries significantly similar to SocialGNN's, even after controlling for SIMPLE and low-level motion features. Further, a non-relational visual control model failed to explain neural responses in these regions. SIMPLE also explained neural responses in similar regions, but effects were weaker and largely accounted for by SocialGNN. These findings suggest that regions in pSTS and LOTC may support relational bottom-up computations during social interaction recognition.
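The representational similarity analysis at the core of this comparison can be sketched compactly: build a dissimilarity matrix (RDM) over stimuli for each candidate representation, then rank-correlate the RDMs' upper triangles. Everything below is synthetic stand-in data; SIMPLE and SocialGNN are the paper's models and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns to each pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs
    (hand-rolled: rank-transform, then Pearson on the ranks)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks = lambda v: np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(ranks(rdm_a[iu]), ranks(rdm_b[iu]))[0, 1]

# Simulated "neural" patterns built from one model's features plus noise,
# versus an unrelated control model.
n_videos, n_feat, n_voxels = 20, 10, 60
model_features = rng.normal(size=(n_videos, n_feat))
mixing = rng.normal(size=(n_feat, n_voxels))
neural = model_features @ mixing + 0.5 * rng.normal(size=(n_videos, n_voxels))
control = rng.normal(size=(n_videos, n_feat))

score_model = rsa_score(rdm(neural), rdm(model_features))
score_control = rsa_score(rdm(neural), rdm(control))
```

The paper's stronger claims (controlling one model for another, and for low-level motion) correspond to partial-correlation variants of this score rather than the plain version sketched here.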

Full Text PDF