
Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall

The BabyView dataset: High-resolution egocentric videos of infants’ and young children’s everyday experiences

Bria Lorelle Long1, Robert Z. Sparks2, Violet Xiang2, Stefan Stojanov3, Zi Yin4, Grace Keene2, Alvin Wei Ming Tan2, Steven Y. Feng2, Auddithio Nag2, Chengxu Zhuang5, Virginia A. Marchman2, Daniel LK Yamins2, Michael Frank2; 1University of California, San Diego, 2Stanford University, 3Amazon, 4Tsinghua University, 5Massachusetts Institute of Technology

Presenter: Bria Lorelle Long

Human children far exceed modern machine learning algorithms in their sample efficiency, achieving high performance in key domains with much less data than current models. This "data gap" is a key challenge both for building intelligent artificial systems and for understanding human development. Egocentric video capturing children's experience, their "training data," is a key ingredient for comparing humans and models and for developing algorithmic innovations to bridge this gap. Yet few such datasets are available, and extant data are low-resolution, have limited metadata, and, importantly, represent only a small set of children's experiences. Here, we provide the first release of a large developmental egocentric video dataset, the BabyView dataset, recorded using a high-resolution camera with a large vertical field of view and gyroscope/accelerometer data. This 868-hour dataset includes egocentric videos from children spanning 5 months to 3 years of age in longitudinal, at-home contexts. We provide gold-standard annotations for the evaluation of speech transcription, speaker diarization, and human pose estimation, and we evaluate models in each of these domains. We also train self-supervised language and vision models and evaluate their transfer to out-of-distribution tasks, including syntactic structure learning, object recognition, depth estimation, and image segmentation. Although performance in each domain scales with dataset size, overall performance is relatively lower than when models are trained on curated datasets, especially in the visual domain. Our dataset thus stands as an open challenge for robust, human-like AI systems: how can such systems achieve human levels of success with the same scale and distribution of training data as humans?

Topic Area: Visual Processing & Computational Vision

Proceedings: Full Text on OpenReview