
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Computational models of vision do not explain the effect of expertise on neural processing of visual Braille

Filippo Cerpelloni¹, Olivier Collignon, Hans Op de Beeck¹; ¹KU Leuven

Presenter: Filippo Cerpelloni

The human visual stream adapts to process letters and words at different processing stages (Vinckier et al., 2007), even when the stimuli do not share canonical script features, as in Braille (Cerpelloni et al., 2024). This supports an interactive account of the Visual Word Form Area (VWFA). Here we extend these findings to test how such script-specific visual features are organized in computational models. By training a benchmark convolutional neural network (AlexNet) to classify words in the Latin script (literacy) and then in the Braille script (expertise), we model the processing of reading visual Braille and explore the network's representations at different stages. We observe a similar degree of clustering between models before and after training on Braille. This lack of alignment between the visual processing of the computational models and the expertise effect seen in neural data suggests that the fundamental processing of reading cannot be fully explained by the visual characteristics of the script, but must also rely on other mechanisms, among them connections to the language system.
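The degree of clustering referred to above can be quantified in several ways; one common choice is to compare within-category and between-category pattern correlations over a layer's activations. The sketch below illustrates that idea on toy data. It is not the authors' pipeline: the function name, the index definition, and the synthetic stimuli are all hypothetical.

```python
import numpy as np

def clustering_index(acts, labels):
    """Mean within-class minus mean between-class pattern correlation.
    acts: (n_stimuli, n_features) layer activations; labels: (n_stimuli,) class ids.
    A positive value indicates class-specific clustering of representations."""
    r = np.corrcoef(acts)                      # stimulus-by-stimulus correlations
    same = labels[:, None] == labels[None, :]  # same-class pairs
    off_diag = ~np.eye(len(labels), dtype=bool)  # drop self-correlations
    within = r[same & off_diag].mean()
    between = r[~same].mean()
    return within - between

# Toy example: two "word classes", each a noisy copy of a class prototype
rng = np.random.default_rng(0)
protos = rng.normal(size=(2, 50))
labels = np.array([0, 0, 0, 1, 1, 1])
acts = np.vstack([protos[c] + 0.3 * rng.normal(size=50) for c in labels])
print(clustering_index(acts, labels))  # positive: patterns cluster by class
```

Comparing such an index across layers, and between the literacy-trained and the Braille-fine-tuned network, is one way to test whether training on Braille reorganizes the model's representations.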

Topic Area: Language & Communication

Extended Abstract: Full Text PDF