Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall

Tracking Time-Varying Syntax in Birdsong with a Sequential Autoencoder

Nadav Elami¹, Yarden Cohen¹; ¹Weizmann Institute of Science

Presenter: Nadav Elami

Songbirds are excellent models for studying sensorimotor sequence learning. Their songs are composed of vocal units called syllables, and the ordering of syllables is governed by syntax rules that determine syllable transition probabilities. We recently used regression analysis to show that canaries, seasonal songbirds, change transition probabilities across days, affording a new model for studying how the brain adapts syntax rules. However, regression analyses, which estimate transition probabilities in neighboring batches of songs, are noise-limited in small subsets of songs. Here, to overcome this limitation and study the dynamics of syntax rules at fine temporal resolution, we develop a neural filtering approach that infers time-varying transition probabilities from birdsong sequences. Inspired by deep learning methods for analyzing neural spiking data, we designed an autoencoder that treats each song as an observation from a probabilistic syntax model whose parameters change between song bouts. We carried out simulated experiments, modeling both simple first-order (Markov) and second-order transition dependencies, and demonstrate that our method accurately tracks syntax changes. These findings underscore the potential of our approach to reveal the neural mechanisms underlying dynamic sensorimotor sequence generation.
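To make the simulated setting concrete, the sketch below (not the authors' code) generates the kind of data the abstract describes: song bouts sampled from a first-order Markov syntax model whose transition probabilities drift slowly between bouts. All names and parameters (syllable count, drift scale, bout length) are illustrative assumptions.

```python
# Minimal sketch, assuming a first-order Markov syntax model with
# slowly drifting transition probabilities; values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_syll = 4       # number of syllable types (assumed)
n_bouts = 200    # number of song bouts across which syntax drifts
bout_len = 30    # syllables per bout (fixed here for simplicity)

# Start from random transition logits; drift them with a small random walk.
logits = rng.normal(size=(n_syll, n_syll))
songs, true_P = [], []
for _ in range(n_bouts):
    logits += 0.05 * rng.normal(size=logits.shape)  # slow syntax drift
    P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row softmax
    true_P.append(P.copy())

    # Sample one bout as a Markov chain over syllable indices.
    s = [rng.integers(n_syll)]
    for _ in range(bout_len - 1):
        s.append(rng.choice(n_syll, p=P[s[-1]]))
    songs.append(np.array(s))

# `songs` would serve as observations for a filtering model; `true_P` is the
# ground-truth time-varying syntax against which inferred transition
# probabilities could be compared (e.g., per-bout KL divergence).
```

In this framing, the autoencoder's task is to recover something like `true_P` for each bout from the syllable sequences alone; a second-order variant would condition each transition on the two preceding syllables rather than one.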

Topic Area: Language & Communication

Extended Abstract: Full Text PDF