Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall
Evolution of Low-Level and Texture Human-CLIP Alignment
Pablo Hernández-Cámara1, Jose Manuel Jaén-Lorites2, Jorge Vila-Tomás1, Jesus Malo1, Valero Laparra1; 1Universitat de València, 2Universidad Politécnica de Valencia
Presenter: Pablo Hernández-Cámara
During the training of multi-modal models like CLIP, we observed an intriguing phenomenon: the correlation with human low-level image quality assessments peaks in the early epochs and then gradually declines. This study investigates this observation and seeks to explain it through two key factors: shape-texture bias alignment and the drop in classification accuracy under noise. Our findings suggest that CLIP initially learns low-level visual features, which enhances its alignment with low-level human perception but also increases its sensitivity to noise and its texture bias. As training progresses, the model shifts toward more abstract, shape-based representations, improving noise robustness but reducing alignment with low-level human perception. These results suggest that these factors share an underlying learning mechanism, and they provide new insights into optimizing the trade-off between perceptual alignment and robustness in vision-language models.
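As a rough illustration of how a human-CLIP low-level alignment score of this kind can be computed, the sketch below correlates CLIP image-embedding distances on reference/distorted image pairs with human mean-opinion scores (MOS), in the style of standard IQA benchmarks such as TID2013. This is a minimal sketch, not the authors' exact protocol: the triplet data format, the pretrained checkpoint, and the use of Pearson correlation are all assumptions.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image
from scipy.stats import pearsonr

device = "cuda" if torch.cuda.is_available() else "cpu"
# Assumption: a pretrained checkpoint stands in for an intermediate training epoch.
model, preprocess = clip.load("ViT-B/32", device=device)

def embed(path):
    """L2-normalized CLIP image embedding for one image file."""
    img = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(img)
    return feat / feat.norm(dim=-1, keepdim=True)

def human_clip_alignment(pairs):
    """Correlate CLIP distances with human quality judgments.

    `pairs` is a hypothetical list of (reference_path, distorted_path, mos)
    triplets from an IQA dataset. A strong *negative* Pearson r indicates
    alignment: larger CLIP distance should mean lower perceived quality.
    """
    model_dists, human_scores = [], []
    for ref_path, dist_path, mos in pairs:
        # Cosine distance between reference and distorted embeddings.
        d = 1.0 - (embed(ref_path) @ embed(dist_path).T).item()
        model_dists.append(d)
        human_scores.append(mos)
    r, _ = pearsonr(model_dists, human_scores)
    return r
```

Evaluating this score at successive training checkpoints, rather than a single pretrained model, is what would trace the early peak and later decline described in the abstract.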
Topic Area: Visual Processing & Computational Vision
Extended Abstract: Full Text PDF