Chair: Ryan Smith
Speakers: Lilian Weber, Peter Thestrup Waade, Andreea Diaconescu, Ryan Smith
Current models of motivation and cognitive control have relied to a large extent on the Reinforcement Learning framework. While this approach has enjoyed considerable success, other frameworks may offer complementary insights. In this symposium, we will explore whether the Active Inference framework could be helpful in this regard. Active Inference is a more recently proposed approach for building cognitive computational models of perception, learning, and decision-making. While Reinforcement Learning approaches have tended to leverage fully observable Markov Decision Processes (MDPs), in which states are known with certainty, Active Inference has instead focused on partially observable MDPs (POMDPs). POMDPs call for Bayesian strategies that represent uncertainty over hidden states and infer states and parameters from observations. In many common task settings, resolving uncertainty over states or parameters requires an active approach to optimizing inference: a value function that motivates choices toward the most informative observations. We will describe how the Active Inference framework offers one such value function, based on a tractable form of approximate Bayesian inference that minimizes a complexity-weighted prediction error: variational free energy in perception, and its expected future value in action selection. This value function formalizes rewarding outcomes as less “surprising” under a model, motivating decisions that strike a balance between maximizing reward and minimizing uncertainty through exploration. This notion of value is grounded in the biological drive of an organism to maintain itself within the small range of states required for continued survival, linking reward to interoception and homeostasis, and linking planning to visceral regulation and allostasis. The framework therefore allows motivation and cognitive control processes to be modeled in a number of different ways and may offer novel, testable hypotheses.
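To make these quantities concrete, a minimal sketch of their standard forms from the Active Inference literature follows; the notation is one common convention, not taken from the talks themselves. For a generative model p(o, s) over observations o and hidden states s, with approximate posterior q(s), the variational free energy minimized in perception decomposes into complexity and accuracy terms:

\[ F = \underbrace{D_{\mathrm{KL}}\!\left[ q(s) \,\|\, p(s) \right]}_{\text{complexity}} \; - \; \underbrace{\mathbb{E}_{q(s)}\!\left[ \ln p(o \mid s) \right]}_{\text{accuracy}}, \]

and the expected free energy G of a policy \(\pi\), whose minimization drives action selection, decomposes into extrinsic (preference-satisfying) and epistemic (uncertainty-resolving) value:

\[ G(\pi) = - \underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[ \ln p(o) \right]}_{\text{extrinsic value}} \; - \; \underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[ D_{\mathrm{KL}}\!\left[ q(s \mid o, \pi) \,\|\, q(s \mid \pi) \right] \right]}_{\text{epistemic value}}, \]

where the prior preference p(o) encodes which outcomes are rewarding (i.e., unsurprising under the model). Minimizing G therefore jointly maximizes expected reward and expected information gain, which is the balance described above.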
In the first talk, Dr. Lilian Weber will motivate the need for a framework such as Active Inference, based on a synthesis of empirical work demonstrating how reward is grounded in maintaining physiological variables within homeostatic bounds. Dr. Peter Thestrup Waade will then formally introduce Active Inference and present simulations comparing it with Reinforcement Learning approaches in modeling motivation and cognitive control tasks. Dr. Andreea Diaconescu will then present empirical work demonstrating how Reinforcement Learning and Active Inference models each account for suicidal motivation in psychiatric patient samples. Finally, Dr. Ryan Smith (chair) will present empirical work using Active Inference to model explore-exploit (multi-armed bandit) task data in healthy individuals and in clinical groups known to show alterations in motivation and cognitive control, namely individuals with depression, anxiety, and substance use disorders. He will also examine similarities and differences in how Active Inference and Reinforcement Learning account for individual differences. A general discussion will then follow, allowing exploration of broader issues and of possible future research directions that could incorporate this approach.
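To illustrate how such a value function can drive explore-exploit behavior of the kind studied in the bandit tasks mentioned above, the following is a minimal, hypothetical sketch (not the models used in any of the talks) of expected-free-energy-based choice in a two-armed bandit. It assumes Beta-Bernoulli beliefs about each arm's reward probability and an assumed preference weight log_pref; the epistemic term is computed numerically on a grid.

import numpy as np

GRID = np.linspace(1e-3, 1 - 1e-3, 500)  # grid over possible reward probabilities
DP = GRID[1] - GRID[0]

def beta_pdf(a, b):
    """Beta(a, b) density over GRID, normalized numerically."""
    d = GRID ** (a - 1) * (1 - GRID) ** (b - 1)
    return d / (d.sum() * DP)

def kl(q, p):
    """KL divergence D_KL[q || p] approximated on the grid."""
    return float(np.sum(q * np.log((q + 1e-12) / (p + 1e-12))) * DP)

def expected_free_energy(a, b, log_pref=2.0):
    """Expected free energy of pulling an arm with Beta(a, b) beliefs.
    log_pref (an assumption) is the log-preference difference between
    rewarded and unrewarded outcomes."""
    prior = beta_pdf(a, b)
    p_win = a / (a + b)                       # predictive reward probability
    extrinsic = p_win * log_pref              # preference-weighted expected outcome
    # Epistemic value: expected KL from prior to posterior beliefs, averaged
    # over the two possible outcomes (reward / no reward).
    epistemic = (p_win * kl(beta_pdf(a + 1, b), prior)
                 + (1 - p_win) * kl(beta_pdf(a, b + 1), prior))
    return -(extrinsic + epistemic)           # the agent minimizes G

# Two-armed bandit simulation: choices balance reward and information gain.
rng = np.random.default_rng(0)
true_p = [0.7, 0.4]                           # hypothetical true reward probabilities
counts = [[1, 1], [1, 1]]                     # Beta(1, 1) priors for each arm
for t in range(200):
    g = [expected_free_energy(a, b) for a, b in counts]
    arm = int(np.argmin(g))                   # pick the arm with lowest G
    reward = rng.random() < true_p[arm]
    counts[arm][0 if reward else 1] += 1      # Bayesian belief update
print(counts)

In this sketch, the epistemic term drives early sampling of both arms to resolve uncertainty, after which choice settles on the higher-paying arm; removing that term leaves purely reward-driven, exploitative choice.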