
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

A Goal-driven Model of Visual Search in Natural Scenes Replicates Human Behavior While Relying on Similar Neural Representations

Motahareh Pourrahimi1, Irina Rish2, Pouya Bashivan1; 1McGill University, 2University of Montreal

Presenter: Motahareh Pourrahimi

Visual search, the process of locating a specific item among multiple objects, is a key paradigm for studying visual attention. Because visual acuity falls off with eccentricity, many animals selectively sample their environment by shifting their gaze, producing search scanpaths, a hallmark of visual search behavior. While much is known about the brain networks involved in visual search, our understanding of the neural computations driving this behavior remains limited, making it difficult to simulate such behavior in silico. To address this gap, we trained an image-computable artificial neural network to perform visual search from pixels in natural scenes. The model's search scanpaths (spatiotemporal sequences of fixations) were highly consistent with those of humans. It captured human information-integration behavior and relied on neural representations similar to those observed in the primate fronto-parietal attentional control network. Examining the model's latent space revealed how it uses its internal state to construct and update a priority map of the visual space, enabling efficient visual search. Our model provides concrete predictions about the neural computations underlying visual search in the primate brain.
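To make the priority-map idea concrete, below is a minimal illustrative sketch, not taken from the paper, of how a priority map over visual space could drive a fixation-by-fixation scanpath with inhibition of return. All function names, parameters, and the toy map are hypothetical; the actual model is an image-computable neural network whose internal state implements and updates the map from pixels.

```python
# Hypothetical sketch of priority-map-driven fixation selection (not the authors' code).
import numpy as np

def run_search(priority_map, target_loc, max_fixations=10, ior_radius=2):
    """Greedy scanpath: fixate the peak of the priority map, suppress the
    visited location (inhibition of return), stop when the target is fixated."""
    pmap = priority_map.astype(float).copy()
    scanpath = []
    for _ in range(max_fixations):
        fix = np.unravel_index(np.argmax(pmap), pmap.shape)  # next fixation = map peak
        scanpath.append(fix)
        if fix == target_loc:                                # target found, search ends
            break
        # Inhibition of return: suppress a small neighborhood around the visited location.
        r0, c0 = fix
        r_lo, r_hi = max(0, r0 - ior_radius), min(pmap.shape[0], r0 + ior_radius + 1)
        c_lo, c_hi = max(0, c0 - ior_radius), min(pmap.shape[1], c0 + ior_radius + 1)
        pmap[r_lo:r_hi, c_lo:c_hi] = -np.inf
        # A full model would recompute the map here from the new retinal input.
    return scanpath

# Toy usage: a random priority map over a 16x16 grid with the target at (5, 9).
rng = np.random.default_rng(0)
demo_map = rng.random((16, 16))
demo_map[5, 9] += 1.0            # make the target location the most salient
print(run_search(demo_map, (5, 9)))
```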

Topic Area: Object Recognition & Visual Attention

Extended Abstract: Full Text PDF