Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall

From sequences to schemas: How recurrent neural networks learn temporal abstractions

Vezha Boboeva1, Alberto Pezzotta1, George Dimitriadis1, Athena Akrami1; 1University College London, University of London

Presenter: Vezha Boboeva

The world, despite its complexity, harbors patterns and regularities that are crucial for animals: many real-life processes unfold over time as structured sequences of events. Brains have evolved to learn and exploit these sequential regularities by forming knowledge at different degrees of abstraction: from simple transition probabilities and timing, to chunking, ordinal knowledge, algebraic patterns, and finally nested tree structures. How regularities expressed as algebraic patterns or abstract schemas (e.g., AAB or ABA) are encoded in the brain is still an open question. Here, we study whether and how neural circuits acquire, organize, and use such an abstract code. We first build a computational framework to generate sequences with abstract temporal patterns. Next, we propose Recurrent Neural Network (RNN) models performing different tasks that require learning and predicting such sequences, and study the conditions under which learning is possible. We then examine the internal representations formed by the network models, and the extent to which these representations are abstract, allowing generalization to novel sequences and tasks.
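To make the setup concrete, here is a minimal sketch in Python/PyTorch of the two ingredients the abstract describes: sampling sequences that instantiate an abstract schema such as AAB (each schema letter is bound to a random token, so the pattern, not the items, repeats across sequences), and training a vanilla RNN on next-token prediction over such sequences. This is an illustrative assumption of the setup, not the authors' framework; the names (`make_schema_sequence`, `NextTokenRNN`), the vocabulary size, and all hyperparameters are hypothetical.

```python
# Illustrative sketch only -- not the authors' code. Assumed names and
# hyperparameters throughout.
import torch
import torch.nn as nn

VOCAB = 10  # number of distinct tokens (assumption)

def make_schema_sequence(schema: str) -> torch.Tensor:
    """Sample concrete tokens for an abstract schema, e.g. 'AAB' -> [4, 4, 7].

    Each distinct letter is bound to a random token, so the sequence
    instantiates the abstract pattern rather than a fixed item sequence.
    """
    letters = sorted(set(schema))
    perm = torch.randperm(VOCAB)[: len(letters)]
    binding = {letter: int(tok) for letter, tok in zip(letters, perm)}
    return torch.tensor([binding[letter] for letter in schema])

class NextTokenRNN(nn.Module):
    """Vanilla RNN that reads a token sequence and predicts the next token."""
    def __init__(self, vocab: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.RNN(hidden, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, vocab)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        states, _ = self.rnn(self.embed(tokens))
        return self.readout(states)  # logits for the token after each position

# Train on sequences drawn from one schema: predict token t+1 from tokens <= t.
model = NextTokenRNN(VOCAB)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(1000):
    batch = torch.stack([make_schema_sequence("AAB") for _ in range(32)])
    logits = model(batch[:, :-1])
    loss = loss_fn(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

In a sketch like this, abstraction could then be probed by testing prediction on token bindings held out during training, in the spirit of the abstract's question about generalization to novel sequences and tasks.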

Topic Area: Language & Communication

Extended Abstract: Full Text PDF