Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall

Evidence for Shepard’s Law in the Representational Spaces of Deep Vision Models

Daniel L. Carstensen1, Steven M. Frankland2, Serra E. Favila1; 1Brown University, 2Dartmouth College

Presenter: Daniel L. Carstensen

Shepard’s (1987) universal law of generalization states that generalization strength decays as a concave function of stimulus distance in psychological space. While widely supported in biological systems, its relevance to artificial neural networks remains unclear. We tested this law across 26 diverse deep vision models using human similarity judgments of naturalistic images. Across models, embedding distances produced concave generalization gradients and aligned closely with human psychological spaces. To examine the role of semantic content, we analyzed model gradients across network depth and compared gradient shapes to human-derived benchmarks. Language-aligned models most closely resembled human data, suggesting semantic representations contribute to model-human alignment. Our findings extend Shepard’s law to modern artificial systems, providing further evidence for its universality. They also highlight deep vision models as compelling proxies for psychological space, providing a novel framework for assessing representational alignment between artificial and human cognition.
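The generalization gradient the abstract describes can be illustrated with a minimal sketch: under Shepard's exponential law, generalization strength between two stimuli falls off as g(d) = exp(-k·d), where d is their distance in psychological (here, embedding) space. The embeddings, dimensionality, and decay constant k below are hypothetical, stand-ins for the model representations the study actually analyzes.

```python
import numpy as np

def generalization_gradient(embeddings, k=1.0):
    """Shepard-style exponential generalization, g(d) = exp(-k * d),
    computed over pairwise Euclidean distances between embeddings."""
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)  # pairwise distance matrix
    return np.exp(-k * d)               # 1.0 at zero distance, decaying with d

# Hypothetical stand-in for a deep vision model's image embeddings:
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 16))  # 5 stimuli, 16-dimensional embedding space
g = generalization_gradient(emb)
# g is 1 on the diagonal (identical stimuli) and decays toward 0 as
# embedding distance grows, producing the gradient shape the study tests.
```

In the study itself, the distances would come from model embeddings of naturalistic images and the gradient shape would be compared against human similarity judgments; this sketch only shows the functional form being tested.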

Topic Area: Visual Processing & Computational Vision

Extended Abstract: Full Text PDF