We are interested in smart probabilistic algorithms, as implemented by brains and machines, that are robust and sample-efficient — ready to be used "in the wild". We see resource constraints both as a practical necessity and as a useful lever to enforce intelligent behavior. Our research is roughly divided into two complementary goals that inform each other:
This page may be slightly out of date as we keep expanding our research in novel directions. We recommend visiting our Publications page for more detailed and up-to-date information about past and ongoing projects.
We develop probabilistic machine learning methods to perform optimization and approximate Bayesian inference with complex scientific models, with applications largely in (but not limited to) computational and cognitive neuroscience. To work with real models and data, our algorithms are robust to noise (e.g., due to Monte Carlo approximations or simulations) and sample-efficient, requiring relatively few function evaluations compared with traditional methods. Our algorithms are released as well-documented toolboxes (see our Resources page).
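To illustrate why robustness to noise matters in this setting, here is a toy sketch (entirely our own construction, not one of the toolboxes mentioned above) of a log-likelihood estimated by simulation: repeated evaluations at the same parameter value return different results, which is the kind of target our methods must handle.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_log_likelihood(theta, n_sims=100):
    """Monte Carlo estimate of a toy model's log-likelihood.

    The 'model' draws samples from Normal(theta, 1); we estimate the
    probability of a simulated outcome falling near a hypothetical
    observed value (0.5) by counting simulations. Because the estimate
    is based on random draws, two calls with the same theta generally
    return different values.
    """
    sims = rng.normal(theta, 1.0, size=n_sims)
    p_hat = np.mean(np.abs(sims - 0.5) < 0.25)  # fraction near the observation
    return np.log(p_hat + 1e-12)  # small constant avoids log(0)

# Evaluating the same point several times yields different estimates,
# so a naive optimizer that trusts single evaluations can be misled.
estimates = [noisy_log_likelihood(0.5) for _ in range(5)]
```

An optimizer or inference method applied to such a target either needs many repeated evaluations to average out the noise, or — as in the sample-efficient methods described above — a model of the noise itself.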
We investigate whether and how people's perception and decision making follow principles of probabilistic inference (the Bayesian brain hypothesis). In a nutshell, a Bayesian observer builds beliefs about states of the world based on observations and assumptions (priors) about the statistical structure of the world. For example, a consequence of Bayesian behavior, empirically observed in many experiments, is that different pieces of sensory evidence are integrated according to their respective reliability. In our work, we "stress test" the Bayesian brain hypothesis until it breaks — to uncover details of approximate Bayesian inference in the brain. We also use the Bayesian observer framework as an x-ray machine that allows us to infer internal representations (e.g., prior beliefs) in the brain. We explore these questions with mathematical modelling and computational analysis of behavioral experiments.
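The reliability-weighted integration mentioned above has a standard closed form for Gaussian cues: each cue is weighted by its precision (inverse variance). A minimal sketch (the cue names and numbers are our own illustrative choices, not data from a specific experiment):

```python
import numpy as np

def integrate_cues(means, sigmas):
    """Precision-weighted integration of independent Gaussian cues.

    Each cue is a noisy Gaussian estimate of the same quantity.
    The Bayesian observer's combined estimate weights each cue by
    its precision (1 / variance), and the combined estimate is more
    reliable (lower variance) than any single cue.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    combined_mean = np.sum(precisions * means) / np.sum(precisions)
    combined_sigma = np.sqrt(1.0 / np.sum(precisions))
    return combined_mean, combined_sigma

# Hypothetical example: a reliable visual cue and a noisier haptic cue
# about an object's size (arbitrary units).
mean, sigma = integrate_cues(means=[10.0, 12.0], sigmas=[1.0, 2.0])
# The combined estimate lies closer to the more reliable cue (10.0),
# and its uncertainty is smaller than either cue's alone.
```

This precision-weighting is the empirical signature referred to in the text: manipulating a cue's reliability shifts the combined percept toward the more reliable cue, a prediction tested in many cue-combination experiments.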
Time, memory, and computational power are resource constraints common to both artificial and biological systems, though they differ in quantity and quality due to the implementation details of brains and machines. We are interested in studying the effects of such constraints within established paradigms such as reinforcement learning. Our goal is both to develop novel algorithmic solutions and to gain insight into the functioning of the brain.