Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall

Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models

Krishn Bera¹, Alexander Fengler¹, Michael Frank¹; ¹Brown University

Presenter: Alexander Fengler

In cognitive neuroscience, there has been growing interest in adopting sequential sampling models (SSMs) as the choice function for reinforcement learning (RLSSM), opening up new avenues for exploring generative processes that can jointly account for decision dynamics within and across trials. To date, such approaches have been limited by computational tractability, e.g., due to the lack of closed-form likelihoods for the decision process and the expensive trial-by-trial evaluation of complex reinforcement learning (RL) processes. We enable hierarchical Bayesian parameter estimation for a broad class of RLSSM models, using Likelihood Approximation Networks (LANs) in conjunction with differentiable RL likelihoods to leverage fast gradient-based inference methods, including Hamiltonian Monte Carlo (HMC) and Variational Inference (VI). By exploiting the differentiability of RL likelihoods, this method improves scalability and enables faster convergence for complex combinations of RL and decision processes. To showcase these methodological advantages, we consider multiple interacting generative learning processes with the Reinforcement Learning–Working Memory (RLWM) task and model. This RLWM model is then combined with SSMs via LANs. When combined with hierarchical variational inference, this approach accurately recovers the posterior parameter distributions in complex RLSSM paradigms; in comparison, fitting the same data with the equivalent choice-only RLWM model yields a biased estimate of the true generative process.
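To make the inference recipe concrete, below is a minimal, illustrative sketch of the core idea under stated assumptions; it is not the authors' implementation. A tiny MLP with frozen random weights stands in for a trained LAN, a differentiable delta-rule Q-update supplies trial-wise drift rates, and NUTS (an HMC variant) samples the posterior via NumPyro. The model is single-subject rather than hierarchical for brevity, and all names (`lan_loglik`, `q_update`), priors, and data shapes are hypothetical.

```python
# Sketch only -- NOT the authors' code. A frozen random MLP stands in for
# a trained LAN; a differentiable delta-rule update (jax.lax.scan) lets
# NUTS take exact gradients through both the learning process and the
# approximate SSM likelihood.
import jax
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS


def lan_loglik(v, a, rt, choice, lan):
    # Stand-in LAN: maps (trial drift, boundary, rt, choice) to an
    # approximate trial-wise log-likelihood. A real LAN's weights come
    # from supervised training on simulated SSM data.
    x = jnp.stack([v, a, rt, choice])
    h = jnp.tanh(lan["W1"] @ x + lan["b1"])
    return lan["w2"] @ h + lan["b2"]


def model(stimuli, rewards, rts, choices, lan):
    # Single-subject priors for brevity; a hierarchical version would
    # add group-level priors over these parameters.
    alpha = numpyro.sample("alpha", dist.Beta(2.0, 2.0))      # learning rate
    a = numpyro.sample("a", dist.LogNormal(0.0, 0.5))         # boundary
    scaler = numpyro.sample("scaler", dist.HalfNormal(2.0))   # Q -> drift

    def q_update(q, trial):
        stim, r, c = trial
        v = scaler * (q[stim, 1] - q[stim, 0])  # drift from current Q-values
        # Differentiable delta-rule update after the choice outcome.
        q = q.at[stim, c].add(alpha * (r - q[stim, c]))
        return q, v

    _, vs = jax.lax.scan(q_update, jnp.zeros((4, 2)),
                         (stimuli, rewards, choices))
    ll = jax.vmap(lambda v, rt, c: lan_loglik(v, a, rt, c, lan))(
        vs, rts, choices.astype(jnp.float32))
    numpyro.factor("lan_loglik", ll.sum())  # add approx. log-lik to joint


# Toy data and frozen stand-in LAN weights, just to show the pipeline runs.
rng = jax.random.PRNGKey(0)
k1, k2, k3, k4, k5, k6, k7 = jax.random.split(rng, 7)
lan = {"W1": 0.1 * jax.random.normal(k1, (16, 4)), "b1": jnp.zeros(16),
       "w2": 0.1 * jax.random.normal(k2, (16,)), "b2": jnp.array(0.0)}
T = 200
stimuli = jax.random.randint(k3, (T,), 0, 4)
choices = jax.random.randint(k4, (T,), 0, 2)
rewards = jax.random.bernoulli(k5, 0.7, (T,)).astype(jnp.float32)
rts = 0.3 + 0.4 * jax.random.exponential(k6, (T,))

mcmc = MCMC(NUTS(model), num_warmup=300, num_samples=300)
mcmc.run(k7, stimuli, rewards, rts, choices, lan)
mcmc.print_summary()
```

Because the scan and the network are differentiable end to end, the same `model` could equally be handed to NumPyro's `SVI` for variational inference in place of NUTS, which is the scalability point the abstract emphasizes.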

Topic Area: Reward, Value & Social Decision Making

Extended Abstract: Full Text PDF