Contributed Talk Session: Friday, August 15, 11:00 am – 12:00 pm, Room C1.03
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Collaborative Encoding of Visual Working Memory

Huang Ham¹, Evan Russek¹, Thomas L. Griffiths¹, Natalia Vélez¹; ¹Princeton University

Presenter: Huang Ham

Collaboration helps humans surmount individual cognitive limitations by distributing information across many minds. However, figuring out when and how to collaborate is not trivial. This study examines whether dyads split up information in a collaborative visual working memory task when doing so improves performance. Participants (N=356) memorized grids of 4, 16, or 36 images both alone and with a partner. We used a visual working memory model to estimate how much dyads would benefit from splitting up a grid of images rather than each memorizing the grid independently. Our model predicts that participants should split up grids that are neither too easy nor too difficult to benefit from collaboration. Indeed, participants tacitly adopted conventions to split up medium and large grids, and they were more accurate in these conditions when they worked together than when they acted alone; they did not split up small grids, where individual performance was already at ceiling. Our work provides a first step toward understanding how decisions about when and how to collaborate are shaped by the adaptive use of cognitive resources.
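
The core prediction can be illustrated with a toy simulation. The sketch below is not the study's model; it assumes a simple item-limit account of visual working memory with a hypothetical per-person capacity of K=10 items, and it scores accuracy as the fraction of grid items covered by at least one partner's memory. Even this crude assumption reproduces the qualitative pattern: no benefit of splitting for small grids (solo performance is at ceiling) and a sizable benefit for medium and large grids.

```python
# A toy item-limit sketch of the splitting prediction, NOT the authors'
# visual working memory model. Assumes each person reliably encodes at
# most `capacity` items (capacity=10 is an arbitrary illustrative choice).

def solo_accuracy(n_items: int, capacity: int = 10) -> float:
    """Expected coverage when one person encodes the whole grid."""
    return min(capacity, n_items) / n_items

def split_accuracy(n_items: int, capacity: int = 10) -> float:
    """Expected coverage when two partners each encode half the grid."""
    half = n_items / 2
    return 2 * min(capacity, half) / n_items

# Grid sizes from the study: 4, 16, and 36 images.
for n in (4, 16, 36):
    solo, split = solo_accuracy(n), split_accuracy(n)
    print(f"grid={n:2d}  solo={solo:.2f}  split={split:.2f}  "
          f"benefit={split - solo:+.2f}")
```

Running this prints a benefit of +0.00 for the 4-item grid (both strategies are at ceiling), +0.38 for the 16-item grid, and +0.28 for the 36-item grid, consistent with the prediction that collaboration pays off most for grids that are neither trivially easy nor overwhelmingly hard.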

Topic Area: Memory, Spatial Cognition & Skill Learning

Extended Abstract: Full Text PDF