Poster Session B: Wednesday, August 13, 1:00 – 4:00 pm, de Brug & E‑Hall
Error Forcing in Recurrent Neural Networks
A Erdem Sağtekin1, Colin Bredenberg2, Cristina Savin3; 1Flatiron Institute, 2Mila - Quebec AI Institute, 3New York University
Presenter: A Erdem Sağtekin
How can feedback improve learning outcomes? Traditionally, feedback signals are thought to directly drive parameter (synaptic) changes. Yet, biophysically, the same signals also affect the activity of neurons. Here, we use this observation to develop a new algorithm for learning in recurrent neural networks, termed error forcing (EF), in which feedback influences both synaptic plasticity and the network state. We geometrically contrast our approach with the established teacher forcing framework, and further provide a Bayesian interpretation of its function. EF learning outperforms traditional approaches in scenarios where feedback is temporally sparse and the output is weakly constrained by the task. These benefits generalize across tasks and are maintained in a biologically constrained approximation of error forcing.
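The abstract gives no implementation details; the sketch below is only a minimal illustration of the stated idea, that a single error signal both perturbs the network state and drives synaptic updates. All specifics here (the feedback weights W_fb, the gain k, the tanh dynamics, and the delta-rule readout update) are assumptions for illustration, not the authors' method.

```python
import numpy as np

# Dimensions and hyperparameters (illustrative choices, not from the abstract)
n_in, n_hid, n_out = 2, 64, 1
eta = 1e-3   # learning rate for the readout weights
k = 0.5      # feedback gain: how strongly the error nudges the hidden state

rng = np.random.default_rng(0)
W_in = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hid, n_in))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(n_hid), (n_hid, n_hid))
W_out = rng.normal(0.0, 1.0 / np.sqrt(n_hid), (n_out, n_hid))
W_fb = rng.normal(0.0, 1.0 / np.sqrt(n_out), (n_hid, n_out))  # error-feedback weights


def error_forcing_pass(x_seq, y_seq, feedback_mask, W_out):
    """Run one sequence, nudging the state and updating weights when feedback arrives.

    feedback_mask[t] is True where a target is available, so feedback can be
    temporally sparse, as in the scenarios the abstract highlights.
    """
    h = np.zeros(n_hid)
    for t in range(len(x_seq)):
        h = np.tanh(W_in @ x_seq[t] + W_rec @ h)   # recurrent dynamics
        y = W_out @ h                               # linear readout
        if feedback_mask[t]:
            e = y_seq[t] - y                        # output error
            # The same error signal plays two roles:
            # (1) it perturbs the network state toward the target ...
            h = h + k * (W_fb @ e)
            # (2) ... and it drives plasticity (here, a delta rule on the readout).
            W_out = W_out + eta * np.outer(e, h)
    return W_out


# Toy usage: a sinusoidal target with feedback available on ~10% of time steps.
T = 100
x_seq = rng.normal(size=(T, n_in))
y_seq = np.sin(np.linspace(0, 4 * np.pi, T))[:, None]
mask = rng.random(T) < 0.1
W_out = error_forcing_pass(x_seq, y_seq, mask, W_out)
```

For contrast, teacher forcing would clamp the output (or the corresponding state variables) to the target during training, whereas the sketch above adds a graded, error-proportional correction to the state; this difference is presumably what the abstract's geometric comparison of the two frameworks concerns.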
Topic Area: Brain Networks & Neural Dynamics