The Machine and Human Intelligence group focuses on probabilistic machine and human learning.

We are interested in smart probabilistic algorithms, as implemented by brains and machines, that are robust and sample-efficient, ready to be used "in the wild". We see resource constraints both as a practical necessity and as a useful lever to enforce intelligent behavior. Our research is roughly divided into two complementary goals that inform each other:

  1. We develop new "smart" machine learning methods, in particular for approximate Bayesian inference.
  2. We study human probabilistic inference and decision making.

This page may be slightly out of date, as we keep expanding our research in novel directions. We recommend visiting our Publications page for more detailed and up-to-date information about past and ongoing projects.

Sample-Efficient Probabilistic Machine Learning

We develop probabilistic machine learning methods to perform optimization and approximate Bayesian inference with complex scientific models, with applications largely in (but not limited to) computational and cognitive neuroscience. To work with real models and data, our algorithms are robust to noise (e.g., due to Monte Carlo approximations or simulations) and sample-efficient, in that they require relatively few function evaluations compared with traditional methods. Our algorithms are released as well-documented toolboxes (see our Resources page).


  1. Robust and sample-efficient Bayesian inference for models with or without a tractable likelihood, via Variational Bayesian Monte Carlo (VBMC). VBMC is a new approach to Bayesian inference that obtains good approximations of the posterior and of the model evidence with a small number of likelihood evaluations (Acerbi, 2018; NeurIPS). The framework has been extended in various directions (Acerbi, 2019; PMLR), notably with the addition of support for noisy log-likelihood evaluations, such as those estimated via simulation (Acerbi, 2020; NeurIPS). We have also applied surrogate modeling and active learning (as in VBMC) to the "embarrassingly parallel" setting (de Souza, Mesquita, Kaski & Acerbi, 2022; AISTATS), and we are working on improving the scalability of the method (Li, Clarté & Acerbi, 2023; arXiv).
  2. Fast hybrid Bayesian optimization for model fitting via Bayesian Adaptive Direct Search (BADS). BADS is a fast optimization algorithm that combines a model-free approach (mesh-adaptive direct search) with a strong model-based algorithm (Bayesian optimization), achieving the best of both worlds, that is, sample efficiency and robustness to noise (Acerbi & Ma, 2017; NeurIPS). BADS is currently used by dozens of computational labs across the world.
  3. Estimation of log-likelihoods for simulator-based models via inverse binomial sampling (IBS). IBS is an efficient statistical technique for estimating the log-likelihood of a model when it cannot be computed analytically, but we can generate simulated data from the model. Unlike other "likelihood-free" methods, IBS does not rely on summary statistics, but computes the log-likelihood of the full data set (van Opheusden*, Acerbi* & Ma, 2020; PLoS Computational Biology). Moreover, IBS combines very well with BADS and VBMC.
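In its simplest form, IBS estimates the log-likelihood of each trial by drawing samples from the model until one matches the observed response: if the first match occurs on draw K, the unbiased estimate of that trial's log-likelihood is minus the harmonic sum 1 + 1/2 + ... + 1/(K-1). A minimal sketch of this idea, where `simulate` stands in for a hypothetical user-supplied simulator (an illustrative interface, not that of our released toolbox):

```python
import math
import random

def ibs_loglike(simulate, stimuli, responses, rng=None):
    """Unbiased IBS estimate of the log-likelihood of a full dataset.

    `simulate(stimulus, rng)` draws one synthetic response from the model
    (hypothetical interface). Assumes every observed response has nonzero
    probability under the model, otherwise the loop would never terminate.
    """
    rng = rng or random.Random(0)
    total = 0.0
    for stim, resp in zip(stimuli, responses):
        k = 1  # number of draws until the first match (a geometric variable)
        while simulate(stim, rng) != resp:
            k += 1
        # minus the harmonic number H_{K-1} is unbiased for log p(resp | stim)
        total -= sum(1.0 / j for j in range(1, k))
    return total

# Toy model: responds True with probability equal to the stimulus value.
# Averaged over many trials, the per-trial estimate approaches log(0.7),
# while each individual trial uses only a handful of model simulations.
est = ibs_loglike(lambda s, r: r.random() < s,
                  [0.7] * 20000, [True] * 20000, random.Random(42))
```

Note the hard part in practice is not the estimator itself but controlling its variance and cost, which is what the repeated-sampling schemes in the paper address.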
Human Probabilistic Inference

We investigate whether and how people's perception and decision making follow principles of probabilistic inference (the Bayesian brain hypothesis). In a nutshell, a Bayesian observer builds beliefs about states of the world based on observations and on assumptions (priors) about the statistical structure of the world. For example, one consequence of Bayesian behavior, empirically observed in many experiments, is that different pieces of sensory evidence are integrated according to their respective reliability. In our work, we "stress test" the Bayesian brain hypothesis until it breaks, in order to uncover the details of approximate Bayesian inference in the brain. We also use the Bayesian observer framework as an x-ray machine that lets us infer internal representations (e.g., prior beliefs) in the brain. We explore these questions with mathematical modeling and computational analysis of behavioral experiments.
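The reliability-weighted integration mentioned above has a simple closed form for two independent Gaussian cues: each cue is weighted by its inverse variance. A minimal sketch of this textbook cue-combination rule (the example numbers are made up for illustration):

```python
def integrate_cues(mu1, sigma1, mu2, sigma2):
    """Reliability-weighted fusion of two independent Gaussian cues."""
    w1 = 1.0 / sigma1 ** 2  # reliability = inverse variance
    w2 = 1.0 / sigma2 ** 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)  # reliability-weighted mean
    sigma = (1.0 / (w1 + w2)) ** 0.5        # fused noise is below either cue's
    return mu, sigma

# A visual cue at 10.0 (sd 1.0) and an auditory cue at 14.0 (sd 2.0):
# the combined estimate is pulled toward the more reliable visual cue.
mu, sigma = integrate_cues(10.0, 1.0, 14.0, 2.0)  # mu = 10.8, sigma ~ 0.894
```

The second return value captures the other behavioral signature: the fused estimate is less variable than the estimate from either cue alone.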


  1. Approximate inference and deviations from Bayes-optimal behavior. We found deviations from Bayesian inference consistent with "noisy" representations of posterior distributions (Acerbi, Vijayakumar & Wolpert, 2014; PLoS Computational Biology). We also investigated potential deviations from probabilistic inference in multisensory perception, in the paradigm known as perceptual causal inference, with mixed results (Acerbi*, Dokka*, Angelaki & Ma, 2018; PLoS Computational Biology). Both studies are characterized by a thorough Bayesian factorial model comparison, necessary to compare multiple alternative hypotheses with subtle differences. In a purely theoretical study, we examined the flexibility of Bayesian models and our ability as researchers to uniquely identify different model components (Acerbi, Ma & Vijayakumar, 2014; NeurIPS).
  2. Internal representations of priors and probability. We studied how people update probability of events that change over time (Norton, Acerbi, Ma & Landy, 2019; PLoS Computational Biology), and the shape of internal representations of distributions of temporal intervals (Acerbi, Wolpert & Vijayakumar, 2012; PLoS Computational Biology).
  3. Uncertainty across perceptual domains. We look at the role of uncertainty, a basic signature of Bayesian inference, in different perceptual domains, such as elementary perceptual organization (Zhou*, Acerbi* & Ma, 2020; PLoS Computational Biology) and visual working memory (Yoo, Acerbi & Ma, 2021; Journal of Vision).
Intelligence Under Resource Constraints

Time, memory and computational power are typical resource constraints, natural to both artificial and biological systems but differing in quantity and quality due to the implementation details of brains and machines. We are interested in studying the effects of such constraints within established paradigms such as reinforcement learning. Our goal is both to develop novel algorithmic solutions and to gain insight into the functioning of the brain.


  1. While computers can easily store numbers with high precision, biological systems are limited in their capacity to process and store information. As a starting point for our broader research agenda, we explored how agents could allocate limited memory resources dynamically within a reinforcement learning paradigm (Patel, Acerbi & Pouget, 2020; NeurIPS).
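To make the idea of limited-precision storage concrete, here is a toy construction of ours for illustration only (not the model from the paper above): tabular Q-learning on a small chain task where every stored value is rounded to a coarse grid. With too few storage levels, small temporal-difference updates are rounded away, so value information fails to propagate to states far from the reward:

```python
import random

def quantize(x, levels):
    """Snap x in [-1, 1] to one of `levels` evenly spaced values,
    mimicking a memory store with limited precision."""
    x = min(max(x, -1.0), 1.0)
    step = 2.0 / (levels - 1)
    return -1.0 + round((x + 1.0) / step) * step

def q_learning_chain(levels, n_states=5, episodes=2000,
                     alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain MDP (reward 1 at the right end),
    with every stored Q-value quantized to `levels` levels."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 1 if Q[s][1] >= Q[s][0] else 0  # greedy; ties go right
            s2 = s + 1 if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            boot = gamma * max(Q[s2]) if s2 < n_states - 1 else 0.0
            Q[s][a] = quantize(Q[s][a] + alpha * (r + boot - Q[s][a]), levels)
            s = s2
    return Q

coarse = q_learning_chain(levels=9)    # ~3 bits per stored value
fine = q_learning_chain(levels=10001)  # effectively full precision
```

With 9 levels, the value of moving right two steps from the reward never rises above zero, whereas the high-precision agent learns it almost exactly; this crude fixed scheme is precisely what a smarter, dynamic allocation of memory resources can improve upon.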