Poster Session C: Friday, August 15, 2:00 – 5:00 pm, de Brug & E‑Hall
Sensorimotor Affordances in a Global Latent Workspace
Nicolas Kuske1, Rufin VanRullen1; 1CNRS
Presenter: Nicolas Kuske
Understanding the role of embodiment in cognition is critical for advancing both neuroscience and artificial intelligence. While biological systems rely on multimodal sensorimotor interactions to ground meaning, artificial models often lack this grounding, limiting their ability to generalize across tasks and environments. In this work, we investigate the emergence of sensorimotor affordances within a Global Latent Workspace (GLW), a multimodal deep learning architecture inspired by the Global Workspace Theory of consciousness. We train a reinforcement learning agent to perform a simulated embodied task (Obstacle Tower Challenge), and use its sensorimotor data to train a GLW multimodal representation (based on an encoder-decoder structure linked to each modality). We compare the GLW representation of images (from the agent's point of view) with representations of the same images from a variational autoencoder. Our analysis reveals that the sensorimotor GLW compresses visual information into a structured motor latent manifold, naturally clustering affordance-relevant representations. Notably, these affordances enable zero-shot visual scene generation from motor states, providing preliminary empirical support for sensorimotor theories of consciousness. By embedding affordances in a shared latent space, the GLW framework offers a biologically inspired path toward more generalizable and grounded artificial perception.
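The core idea of the abstract, that a shared latent workspace linking modality-specific encoders and decoders can support cross-modal (motor-to-visual) generation, can be illustrated with a toy linear sketch. This is not the authors' architecture: the GLW uses deep encoder-decoder networks, whereas here a hypothetical "workspace" is simply the top principal directions of jointly observed visual and motor data, with per-modality encoders and decoders fit by least squares on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, dx, dm = 2000, 4, 32, 8  # samples, latent dim, visual dim, motor dim

# Synthetic agent data: both modalities are driven by a common latent cause,
# standing in for the agent's sensorimotor state.
z = rng.normal(size=(n, k))
x = z @ rng.normal(size=(k, dx)) + 0.05 * rng.normal(size=(n, dx))  # "visual"
m = z @ rng.normal(size=(k, dm)) + 0.05 * rng.normal(size=(n, dm))  # "motor"

# Shared workspace latent: top-k principal directions of the joint data.
joint = np.hstack([x, m])
joint = joint - joint.mean(axis=0)
_, _, vt = np.linalg.svd(joint, full_matrices=False)
w = joint @ vt[:k].T  # (n, k) workspace coordinates

# Per-modality linear encoder/decoder fit by least squares.
enc_m, *_ = np.linalg.lstsq(m, w, rcond=None)  # motor -> workspace
dec_x, *_ = np.linalg.lstsq(w, x, rcond=None)  # workspace -> visual

# Cross-modal generation: encode the motor state alone, decode a visual scene.
x_from_motor = m @ enc_m @ dec_x
err = np.mean((x_from_motor - x) ** 2)
baseline = np.mean((x - x.mean(axis=0)) ** 2)  # predict-the-mean baseline
print(f"motor-to-visual MSE {err:.4f} vs baseline {baseline:.4f}")
```

Because the two modalities share a latent cause, the motor state alone pins down the workspace coordinates, so the decoded visual prediction beats the mean baseline by a wide margin; the abstract's zero-shot scene generation is the nonlinear, learned analogue of this effect.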
Topic Area: Predictive Processing & Cognitive Control
Extended Abstract: Full Text PDF