Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall
Neural encoding with affine feature response transforms
Lynn Le1, Nils Kimman1, Thirza Dado2, K. Seeliger3, Paolo Papale4, Antonio Lozano4, Pieter R. Roelfsema5, Marcel van Gerven6, Yağmur Güçlütürk7, Umut Güçlü7; 1Radboud University, 2Donders Institute for Brain, Cognition and Behaviour, 3Martin-Luther-Universität Halle-Wittenberg, 4Netherlands Institute for Neuroscience, 5Netherlands Institute for Neuroscience, 6Donders Institute for Brain, Cognition and Behaviour, Radboud University, 7Radboud University Nijmegen
Presenter: Lynn Le
Current linearizing encoding models that predict neural responses to sensory input typically neglect neuroscience-inspired constraints that could enhance model efficiency and interpretability. To address this, we propose a new method called affine feature response transform (AFRT), which exploits the brain's retinotopic organization. Applying AFRT to encode multi-unit activity in areas V1, V4, and IT of the macaque brain, we demonstrate that AFRT reduces redundant computations and enhances the performance of current linearizing encoding models by decomposing each neuron's receptive field into an affine retinal transform followed by a localized feature response. Remarkably, by factorizing receptive fields into a sequential affine component with three interpretable parameters (for shifting and scaling) and a response component with a small number of feature weights per neuron, AFRT achieves encoding with orders of magnitude fewer parameters than unstructured models. We show that the retinal transform learned for each neuron agrees well with that neuron's receptive field in the brain. Together, these findings suggest that this new subclass of spatial transformer networks can be instrumental in neural encoding models of naturalistic stimuli.
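The abstract gives no implementation, but the factorization it describes maps naturally onto a spatial-transformer-style module. The sketch below is a minimal PyTorch illustration under stated assumptions, not the authors' code: the class name `AFRTEncoder`, the patch size `out_size`, and the linear readout form are all assumptions. Each neuron gets three affine parameters (an isotropic scale and an x/y shift) that resample a shared feature map, followed by a small set of feature weights on the resampled patch.

```python
# Minimal sketch of an AFRT-style encoder (an assumption; details may differ
# from the authors' implementation). Each neuron learns 3 affine parameters
# (scale, shift-x, shift-y) that crop/zoom a shared feature map, plus a small
# linear readout over the resulting patch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AFRTEncoder(nn.Module):
    def __init__(self, n_neurons: int, n_channels: int, out_size: int = 7):
        super().__init__()
        # Affine component: one isotropic scale and one (x, y) shift per neuron.
        self.scale = nn.Parameter(torch.ones(n_neurons))
        self.shift = nn.Parameter(torch.zeros(n_neurons, 2))
        self.out_size = out_size
        # Response component: a few feature weights per neuron.
        self.readout = nn.Parameter(
            torch.randn(n_neurons, n_channels * out_size * out_size) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, H, W), e.g. from a pretrained backbone.
        B, C, H, W = features.shape
        N = self.scale.shape[0]
        # One 2x3 affine matrix per neuron: [[s, 0, tx], [0, s, ty]].
        theta = features.new_zeros(N, 2, 3)
        theta[:, 0, 0] = self.scale
        theta[:, 1, 1] = self.scale
        theta[:, :, 2] = self.shift
        # Neuron-specific sampling grids over the feature map.
        grid = F.affine_grid(theta, (N, C, self.out_size, self.out_size),
                             align_corners=False)          # (N, k, k, 2)
        # Apply every neuron's grid to every stimulus in the batch.
        feats = features.unsqueeze(1).expand(B, N, C, H, W).reshape(B * N, C, H, W)
        grids = grid.unsqueeze(0).expand(B, N, self.out_size, self.out_size, 2)
        patches = F.grid_sample(
            feats, grids.reshape(B * N, self.out_size, self.out_size, 2),
            align_corners=False)
        patches = patches.reshape(B, N, -1)                # (B, N, C*k*k)
        # Localized linear response per neuron.
        return (patches * self.readout).sum(-1) + self.bias  # (B, N)
```

Because only three affine parameters plus one small readout are fit per neuron, and the feature map is shared across neurons, the parameter count stays far below that of an unstructured linear readout over the full feature map, which is the efficiency argument the abstract makes.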
Topic Area: Visual Processing & Computational Vision
Extended Abstract: Full Text PDF