
Contributed Talk Session: Thursday, August 14, 10:00 – 11:00 am, Room C1.03
Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E-Hall

Connectome-Constrained Unsupervised Learning Reveals Emergent Visual Representations in the Drosophila Optic Lobe

Keisuke Toyoda¹, Naoya Nishiura², Rintaro Kai¹, Masataka Watanabe; ¹The University of Tokyo, Tokyo Institute of Technology; ²The University of Tokyo

Presenter: Keisuke Toyoda

Understanding how brain structure enables visual processing is a central question in neuroscience. While Drosophila offers a complete connectome, computational models often rely on biologically implausible supervised signals. We address this by building a large-scale autoencoder constrained by the complete Drosophila right optic lobe connectome (~45k neurons, FlyWire dataset). Using photoreceptors (R1–R6) as both input and output, the model incorporates anatomical feedforward and feedback loops and was trained unsupervised on naturalistic video stimuli to minimize reconstruction error. Temporal offsets between input and target frames were included to probe predictive capacity. The autoencoder reconstructed photoreceptor inputs with high fidelity. Neurons in deeper layers (medulla, lobula) showed moderate, stable activity under sustained input, consistent with efficient engagement and functional recurrent loops. Temporal offsets improved short-term prediction, indicating that the model learned the dynamics of its inputs. We demonstrate that a connectome-based autoencoder can learn meaningful visual representations through biologically plausible unsupervised learning. This highlights how anatomical structure shapes emergent function and provides a digital-twin framework for studying visual processing beyond task-specific supervised approaches, suggesting that complex representations can arise from self-organization on detailed neural circuits.
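
To make the architecture concrete, the sketch below shows one way a connectome-constrained recurrent autoencoder of this kind could be set up in PyTorch. It is not the authors' code: the class and parameter names (ConnectomeAutoencoder, step, offset), the tanh activation, the initialization, and the mean-squared reconstruction loss are all illustrative assumptions. The only ingredients taken from the abstract are a fixed binary adjacency matrix (here assumed to be derived from FlyWire), photoreceptor indices standing in for R1–R6 as both input and readout, and a temporal offset that turns reconstruction into short-term prediction.

```python
# Minimal sketch (not the authors' implementation) of a connectome-
# constrained autoencoder. Assumes `adjacency` is a binary (n, n) tensor
# from the connectome and `photoreceptor_idx` indexes the R1-R6 cells.
import torch
import torch.nn as nn

class ConnectomeAutoencoder(nn.Module):
    def __init__(self, adjacency: torch.Tensor, photoreceptor_idx: torch.Tensor):
        super().__init__()
        n = adjacency.shape[0]
        # Fixed anatomical mask: only connections present in the
        # connectome (adjacency == 1) can carry nonzero weight.
        self.register_buffer("mask", adjacency.float())
        self.register_buffer("pr_idx", photoreceptor_idx)
        self.weight = nn.Parameter(torch.randn(n, n) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n))

    def step(self, state: torch.Tensor, drive: torch.Tensor) -> torch.Tensor:
        # One recurrent update over the masked connectome; feedforward
        # and feedback loops both live in the same adjacency structure.
        w = self.weight * self.mask
        return torch.tanh(state @ w.T + drive + self.bias)

    def forward(self, frames: torch.Tensor, offset: int = 0):
        # frames: (T, n_photoreceptors) video projected onto R1-R6.
        T = frames.shape[0]
        n = self.mask.shape[0]
        state = torch.zeros(n)
        recons = []
        for t in range(T):
            drive = torch.zeros(n)
            drive[self.pr_idx] = frames[t]          # inject input at R1-R6
            state = self.step(state, drive)
            recons.append(state[self.pr_idx])       # read output at R1-R6
        recon = torch.stack(recons)
        # offset > 0 scores activity at time t against the frame at
        # t + offset, i.e. reconstruction becomes short-term prediction.
        if offset > 0:
            loss = ((recon[:-offset] - frames[offset:]) ** 2).mean()
        else:
            loss = ((recon - frames) ** 2).mean()
        return recon, loss
```

The fixed mask is what makes such a model connectome-constrained: training adjusts only the strengths of anatomically present synapses, so the feedforward and feedback pathways are inherited from the wiring diagram rather than learned, and the offset argument implements the predictive variant of the objective described in the abstract.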

Topic Area: Visual Processing & Computational Vision

Extended Abstract: Full Text PDF