Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall
Meta-Reinforcement Learning in Homeostatic Regulation
Naoto Yoshida1; 1Kyoto University
Presenter: Naoto Yoshida
The homeostasis of internal bodily states is essential for animal survival. In computational neuroscience, Homeostatically Regulated Reinforcement Learning (HRRL) has been proposed as a theoretical framework for modeling how agents learn, through trial and error, behaviors that maintain homeostasis. HRRL assumes internal dynamics within the agent and defines rewards based on the agent's internal state. However, it remains unclear what kinds of behavioral learning such internally defined rewards enable. In this study, we hypothesized that, given such internally defined rewards, agents can acquire meta-reinforcement learning (meta-RL) capabilities when multimodal inputs and recurrent connections are incorporated into the policy network architecture. Numerical experiments suggested that the proposed architecture enables the HRRL agent to acquire exploratory behaviors in the environment, indicating that meta-learning abilities comparable to those of previously known meta-RL approaches can be achieved with a different architecture.
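The internally defined reward in HRRL is commonly formulated as drive reduction: the agent is rewarded when an action moves its internal state closer to a homeostatic setpoint. The sketch below illustrates this idea using a Keramati–Gutkin-style drive function; the exponents, setpoints, and state values are illustrative assumptions, not the exact formulation used in this work.

```python
import numpy as np

def drive(h, h_star, m=3.0, n=4.0):
    """Drive as a distance of the internal state h from the setpoint h*.
    Exponents m, n follow the Keramati-Gutkin-style form (illustrative)."""
    return np.sum(np.abs(h_star - h) ** n) ** (1.0 / m)

def homeostatic_reward(h_t, h_next, h_star):
    """Drive-reduction reward: positive when the transition
    moves the internal state closer to the setpoint."""
    return drive(h_t, h_star) - drive(h_next, h_star)

# Illustrative two-dimensional internal state (e.g., energy, hydration).
h_star = np.array([1.0, 0.5])   # homeostatic setpoints
h_t    = np.array([0.6, 0.3])   # internal state before acting
h_next = np.array([0.8, 0.4])   # internal state after, e.g., eating

r = homeostatic_reward(h_t, h_next, h_star)  # positive: deviation reduced
```

Because the reward depends only on the agent's internal state, a recurrent policy receiving multimodal (exteroceptive plus interoceptive) inputs can, in principle, adapt its behavior online as its internal state changes, which is the meta-RL capability the abstract investigates.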
Topic Area: Reward, Value & Social Decision Making
Extended Abstract: Full Text PDF