
Contributed Talk Session: Friday, August 15, 11:00 am – 12:00 pm, Room C1.04
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Models trained on infant views are more predictive of infant visual cortex

Cliona O'Doherty¹, Áine Travers Dineen², Anna Truzzi, Graham King, Enna-Louise D'Arcy, Chiara Caldinelli³, Tamrin Holloway, Eleanor Molloy, Rhodri Cusack⁴; ¹University of Dublin, Trinity College, ²Trinity College Dublin, ³University of Cincinnati, ⁴Trinity College, Dublin

Presenter: Cliona O'Doherty

The perspective of a developing infant offers unique potential for training a neural network. Egocentric video from a young child can provide ample data for representation learning in vision and language models, at only some cost to model performance. It is known that pre-trained DNNs optimised for object classification are good models of the ventral visual stream in adults, but would the same be true prior to the onset of classification behaviour? Here, we explore whether models trained on infant views are more predictive of category responses in infant ventrotemporal cortex (VTC). Using awake fMRI in a large cohort of 2-month-olds, we find that, unlike in adults, features from neural networks pre-trained on infant headcam data are better models of infant VTC.

Topic Area: Visual Processing & Computational Vision

Extended Abstract: Full Text PDF