
Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall

Interpretable prediction of human fixations from behavior-derived representational dimensions

Luca Kämmer1, Alexander Kroner2, Martin N Hebart3; 1Max Planck Institute for Human Cognitive and Brain Sciences, 2Universität Osnabrück, 3Justus Liebig Universität Gießen

Presenter: Luca Kämmer

Eye movements are core to how humans perceive their environment, which is why understanding which properties guide fixations can help us better understand visual perception. Although many computational models attempt to predict eye movements, they mostly aim to maximize performance without offering insight into why certain image regions draw our gaze. We addressed this question by leveraging 49 representational object dimensions that capture visual and semantic object information to predict human fixations on images. We weighted these dimensions by their relevance for an image to generate behaviorally relevant feature maps, without training on fixation data. Our approach outperformed a permutation-controlled baseline and matched the performance of a saliency model. Crucially, our predictions are interpretable, offering insight into which representational dimensions drive them. Lastly, we showed how predictive individual dimensions are of fixations in general, helping us better understand which features drive gaze allocation.
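The core idea of the abstract can be illustrated with a minimal sketch (not the authors' code; all array names, shapes, and the toy random data are assumptions for illustration): per-pixel scores for each of the 49 representational dimensions are combined via image-level relevance weights into a single feature map that can then be compared against fixation maps.

```python
import numpy as np

# Hypothetical sketch, assuming per-pixel dimension scores and
# image-level relevance weights are available as arrays.
rng = np.random.default_rng(0)
H, W, D = 32, 32, 49              # toy spatial grid, 49 dimensions
dim_maps = rng.random((H, W, D))  # per-pixel score for each dimension
relevance = rng.random(D)         # relevance weight of each dimension for this image
relevance /= relevance.sum()      # normalize weights to sum to 1

# Weighted sum over dimensions -> behaviorally relevant feature map
feature_map = dim_maps @ relevance                                # shape (H, W)
feature_map = (feature_map - feature_map.min()) / np.ptp(feature_map)  # rescale to [0, 1]

print(feature_map.shape)  # (32, 32)
```

The resulting map could then be evaluated against empirical fixation density with a standard saliency metric (e.g., correlation or AUC); how the relevance weights are derived from behavior is the substantive contribution of the work and is not reproduced here.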

Topic Area: Object Recognition & Visual Attention

Extended Abstract: Full Text PDF