Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall
Parametric control along the encoding axes of IT neurons uncovers hidden differences in model-brain alignment
Jacob S. Prince1, Binxu Wang1, Akshay Vivek Jagadeesh2, Thomas Fel1, Emily Lo, George A. Alvarez1, Margaret Livingstone1, Talia Konkle1; 1Harvard University, 2Harvard Medical School, Harvard University
Presenter: Jacob S. Prince
As model-brain alignment scores increasingly saturate under current assessment methods, new approaches are needed to test whether hidden differences remain in how well models capture biological feature tuning. To this end, we introduce a paradigm for comparing deep encoding models based on their ability to *control* neural responses along their hypothesized encoding axes. Using recordings from macaque inferotemporal cortex, we compared two DNN-based encoding models: a standard ResNet-50 and an adversarially robust variant (RN50-robust). These models achieved comparable performance in predicting neural responses over a wide range of natural images. However, they differed substantially when subjected to a test of “parametric control.” Leveraging an explainable AI technique called feature accentuation, we synthesized image sets that varied systematically, in precise intervals, along each encoding axis, based on the hierarchical computations of each model. We found that accentuated stimuli from the robust model achieved superior control of neural firing. We then synthesized “controversial” stimuli that further validated the brain alignment of RN50-robust over the baseline model. Our framework offers a new means of arbitrating between models, one that demands a more precise characterization of feature tuning in targeted local regions of image space.
Topic Area: Object Recognition & Visual Attention
Extended Abstract: Full Text PDF
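
To make the notions of an "encoding axis" and of accentuating stimuli along it concrete, the sketch below shows one plausible implementation in PyTorch: a ridge-regression axis is fit from ResNet-50 features to a single neuron's responses, and a seed image is then nudged by gradient ascent so that its predicted response increases in graded steps along that axis. This is a minimal illustration under assumed choices (torchvision ResNet-50 features, the hypothetical `fit_encoding_axis` and `accentuate` helpers, arbitrary step sizes), not the authors' feature-accentuation pipeline, which additionally constrains synthesized images to remain naturalistic.

```python
# Illustrative sketch only (not the authors' code): fit a linear encoding axis
# from DNN features to a neuron's responses, then push an image along that axis.
import torch
import torchvision.models as models

# Backbone feature extractor (standard ResNet-50; a robust variant would be
# loaded from its own checkpoint instead).
backbone = models.resnet50(weights="IMAGENET1K_V2").eval()
feature_layers = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop final fc

def features(x):
    # x: (batch, 3, H, W) -> global-pooled features (batch, 2048)
    return feature_layers(x).flatten(1)

def fit_encoding_axis(X, y, lam=1.0):
    """Ridge regression mapping image features X (n_images, 2048) to one
    neuron's recorded responses y (n_images,); returns the encoding axis w."""
    d = X.shape[1]
    return torch.linalg.solve(X.T @ X + lam * torch.eye(d), X.T @ y)

def accentuate(seed_img, w, step=0.01, n_steps=20, save_every=5):
    """Gradient-ascent sketch of feature accentuation: nudge the image so the
    projection of its features onto w (the predicted response) rises, saving
    intermediates as a graded stimulus set along the encoding axis."""
    img = seed_img.clone().requires_grad_(True)
    stimuli = []
    for t in range(n_steps):
        pred = features(img) @ w            # predicted neural response
        pred.sum().backward()
        with torch.no_grad():
            img += step * img.grad          # ascend the encoding axis
            img.clamp_(0, 1)
            img.grad.zero_()
        if (t + 1) % save_every == 0:
            stimuli.append(img.detach().clone())
    return stimuli
```

A "controversial" variant of this sketch would optimize the same image to raise the predicted response under one model's axis while lowering it under the other's, so that the two models make divergent predictions for the recorded neuron.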