Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall
Deep neural networks can learn generalizable same-different visual relations
Alexa R. Tartaglini1, Sheridan Feucht2, Michael A. Lepori3, Wai Keen Vong4, Charles Lovering5, Brenden Lake6, Ellie Pavlick3; 1Stanford University, 2Northeastern University, 3Brown University, 4Facebook, 5Kensho, 6New York University
Presenter: Alexa R. Tartaglini
Although deep neural networks can achieve human-level performance on many object recognition benchmarks, prior work suggests that these same models fail to learn simple abstract relations, such as determining whether two objects are the same or different. Much of this prior work focuses on training convolutional neural networks to classify images containing two same or two different abstract shapes, and tests generalization only on within-distribution stimuli. In this article, we comprehensively study whether deep neural networks can acquire and generalize same-different relations both within- and out-of-distribution, using a variety of architectures, forms of pretraining, and fine-tuning datasets. We find that certain pretrained transformers can learn a same-different relation that generalizes with near-perfect accuracy to out-of-distribution stimuli. Furthermore, we find that fine-tuning on abstract shapes that lack texture or color provides the strongest out-of-distribution generalization. Our results suggest that, with the right approach, deep neural networks can learn generalizable same-different visual relations.
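To make the setup described in the abstract concrete, below is a minimal sketch, not the authors' released code, of fine-tuning a pretrained Vision Transformer on a binary same/different classification task and then measuring accuracy on out-of-distribution stimuli. The choice of torchvision's ViT-B/16 with ImageNet weights, the use of FakeData as a stand-in for real same/different shape images, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: fine-tune a pretrained ViT for same/different classification,
# then evaluate on an out-of-distribution split. Datasets here are random placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import FakeData
from torchvision.models import vit_b_16, ViT_B_16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained ViT-B/16; swap the 1000-way ImageNet head for a 2-way head
# (label 0 = "different", label 1 = "same").
weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)
model = model.to(device)

preprocess = weights.transforms()  # resize / normalize as during pretraining

# Placeholder data: in the real setup, training images would show two texture-free
# abstract shapes (same or different), and the OOD split would use novel stimuli.
train_data = FakeData(size=512, image_size=(3, 224, 224), num_classes=2, transform=preprocess)
ood_data = FakeData(size=128, image_size=(3, 224, 224), num_classes=2, transform=preprocess)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)
ood_loader = DataLoader(ood_data, batch_size=32)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

# Standard fine-tuning loop over the binary same/different task.
model.train()
for epoch in range(2):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Out-of-distribution evaluation: accuracy on stimuli unlike the training shapes.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in ood_loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"OOD same-different accuracy: {correct / total:.3f}")
```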
Topic Area: Visual Processing & Computational Vision
proceeding: Full Text on OpenReview