Publications

Our research is regularly published in top-tier machine learning conferences, such as NeurIPS, and in renowned computational journals, such as PLoS Computational Biology. See below for a detailed list of our publications.

You can find Luigi Acerbi’s Google Scholar Citations page here.

Preprints

  • Yoo AH, Acerbi L, Ma WJ (2020)
    Uncertainty is Maintained and Used in Working Memory
    bioRxiv
    Link | Code (GitHub)

    What are the contents of working memory? In both behavioral and neural computational models, the working memory representation of a stimulus is typically described by a single number, namely a point estimate of that stimulus. Here, we asked if people also maintain the uncertainty associated with a memory, and if people use this uncertainty in subsequent decisions.

In press / 2020

  • van Opheusden B*, Acerbi L*, Ma WJ (2020)
    Unbiased and Efficient Log-Likelihood Estimation with Inverse Binomial Sampling
    To appear in PLoS Computational Biology (*equal contribution)
    Link | Code (GitHub) | Tweeprint
    Many models (in psychology, neuroscience, and beyond) have no closed-form log-likelihood, but we can easily sample observations from the model (i.e., via simulation). Inverse binomial sampling (IBS) is a technique to estimate the log-likelihood via sampling in an efficient and unbiased way and, unlike similar methods, it uses the full data rather than summary statistics. IBS enables likelihood-based inference for models without accessible likelihoods!
  • Acerbi L (2020)
    Variational Bayesian Monte Carlo with Noisy Likelihoods
    In Proc. Advances in Neural Information Processing Systems 33 (NeurIPS '20), virtual conference.
    Link | arXiv | Code (GitHub) | Tweeprint
    We extend Variational Bayesian Monte Carlo (VBMC) to perform sample-efficient Bayesian posterior and model inference even with noisy likelihood evaluations, such as those obtained via simulation (i.e., sampling). We tested VBMC on many models and real data from computational and cognitive neuroscience, with up to D = 9 parameters. The new versions of VBMC vastly outperform previous methods (including older versions of VBMC), and inference remains quite fast thanks to the combination of variational inference and Bayesian quadrature.
  • Patel N, Acerbi L, Pouget A (2020)
    Dynamic allocation of limited memory resources in reinforcement learning
    In Proc. Advances in Neural Information Processing Systems 33 (NeurIPS '20), virtual conference.
    Link | arXiv | Code (GitHub)
    In this work we propose a dynamical framework to maximize expected reward under constraints of limited memory, such as those experienced by biological brains. We derive from first principles an algorithm, Dynamic Resource Allocator, which we apply to standard tasks in reinforcement learning and a model-based planning task, and find that it allocates more resources to items in memory that have a higher impact on cumulative rewards. This work provides a normative solution to the problem of online learning of how to allocate costly resources to a collection of uncertain memories.
  • Zhou Y*, Acerbi L*, Ma WJ (2020)
    The Role of Sensory Uncertainty in Simple Contour Integration
    To appear in PLoS Computational Biology. (*equal contribution)
    Link | Data and code (GitHub)
    Our percept of the world is governed not only by the sensory information we have access to, but also by the way we interpret this information as part of a whole ("perceptual organization"). This study examines whether and how people incorporate uncertainty into perceptual organization, by varying sensory uncertainty from trial to trial in a contour integration task, an elementary form of perceptual organization. We found that people indeed take sensory uncertainty into account, albeit in a way that subtly deviates from optimal behavior.
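
The IBS estimator described in the inverse binomial sampling entry above can be sketched in a few lines. This is a minimal toy sketch with a Bernoulli simulator and illustrative names (e.g., `make_simulator`), not the authors' implementation: for each trial we draw simulated responses until one matches the observed response, and a first match on draw K contributes the unbiased estimate −Σ_{k=1}^{K−1} 1/k of that trial's log-likelihood.

```python
import numpy as np

def make_simulator(theta, rng):
    """Toy Bernoulli 'simulator': a model we can only sample from."""
    return lambda: rng.random() < theta

def ibs_loglik(simulate, responses, max_draws=100_000):
    """Unbiased IBS estimate of the total log-likelihood sum_i log p(r_i).

    For each trial, draw simulated responses until one matches the observed
    response; a first match on draw K contributes -sum_{k=1}^{K-1} 1/k.
    """
    total = 0.0
    for r in responses:
        k = 1
        while simulate() != r:
            k += 1
            if k > max_draws:
                raise RuntimeError(
                    "no match; a real implementation would use early stopping")
        total -= np.sum(1.0 / np.arange(1, k))  # empty sum (0) when k == 1
    return total
```

A single call returns a noisy but unbiased estimate; averaging repeated calls converges to the analytic log-likelihood (here 2·log 0.7 + log 0.3 for a Bernoulli model with θ = 0.7 and responses [True, True, False]).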

2019

  • Norton EH, Acerbi L, Ma WJ, Landy MS (2019)
    Human online adaptation to changes in prior probability
    PLoS Computational Biology 15(7): e1006681. DOI: 10.1371/journal.pcbi.1006681
    Link | Data and code (GitHub)

    How do people learn and adapt to changes in the probability of events? We addressed this question with two psychophysical tasks that involved categorization of visual stimuli, where the probability of the categories jumped over the course of the experiment. Using Bayesian model comparison and a handful of different observer models, we found that human data are explained best by a model that estimates category probability based on recently observed exemplars, with a bias towards equal probability. Interestingly, one of the tasks is virtually the same as the mouse decision-making task used by the International Brain Laboratory.

  • Acerbi L (2019)
    An Exploration of Acquisition and Mean Functions in Variational Bayesian Monte Carlo
    In Proc. Machine Learning Research 96: 1-10. 1st Symposium on Advances in Approximate Bayesian Inference, Montréal, Canada 
    Link
    In this paper, we tried to find better acquisition functions or mean functions for VBMC, but we did not manage to find anything that would work significantly better than what is in the 2018 VBMC paper. In a positive light, this work shows that our original choices for VBMC were pretty good.

2018

  • Acerbi L (2018)
    Variational Bayesian Monte Carlo
    In Proc. Advances in Neural Information Processing Systems 31 (NeurIPS '18), Montréal, Canada
    Link | arXiv | Code (GitHub) | Tweeprint
    This paper introduced VBMC, a novel machine learning method to perform Bayesian posterior and model inference when the model likelihood is moderately expensive to evaluate. Take Bayesian optimization, but instead of computing only a point estimate, VBMC returns a full approximate posterior and a lower bound on the log model evidence (the ELBO), useful for model comparison. VBMC combines variational inference and active-sampling Bayesian quadrature, and vastly improves over previous seminal Bayesian quadrature methods. We also released VBMC as a user-friendly MATLAB toolbox.
  • Acerbi L*, Dokka K*, Angelaki DE, Ma WJ (2018)
    Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception
    PLoS Computational Biology 14(7): e1006110. DOI: 10.1371/journal.pcbi.1006110 (*equal contribution)
    Link | Data and code (GitHub)
    How do people combine information from vision and from their vestibular sense to know in which direction they are moving? We originally thought that psychophysical data collected by our collaborators at Baylor College of Medicine would be able to strongly distinguish competing models of multisensory perception. However, behavioral models have many plausible tweaks (e.g., observer assumptions, heteroskedasticity, decision noise), and when we allowed for those, the models became less distinguishable. Thus, the story became more methodological: how to comprehensively compare models of multisensory perception in a robust, principled way.
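
As background for the VBMC entries above, the ELBO is the standard variational lower bound on the log model evidence. Writing q_φ(θ) for the variational posterior over parameters θ and 𝒟 for the data (notation ours, not taken from the papers):

```latex
\mathrm{ELBO}(\phi)
  \;=\; \mathbb{E}_{q_\phi(\theta)}\!\left[\log p(\mathcal{D} \mid \theta) + \log p(\theta)\right]
  \;+\; \mathcal{H}\!\left[q_\phi(\theta)\right]
  \;\le\; \log p(\mathcal{D})
```

Equality holds when q_φ matches the true posterior; VBMC estimates the expected log-joint term via Bayesian quadrature on a Gaussian-process surrogate of the log-likelihood.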

2017

  • Acerbi L, Ma WJ (2017)
    Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search
    In Proc. Advances in Neural Information Processing Systems 30 (NeurIPS '17), Long Beach, USA
    Link | arXiv (article + supplement) | Code (GitHub)
    This paper introduced BADS, a novel optimization method that performs fast and robust hybrid Bayesian optimization. BADS works with both noiseless and noisy objective functions, and it outperforms many optimizers on real model-fitting problems. BADS is currently used in dozens of computational labs across the world and is available as a user-friendly MATLAB toolbox.
  • Acerbi L, Vijayakumar S, Wolpert DM (2017)
    Target Uncertainty Mediates Sensorimotor Error Correction
    PLoS ONE 12(1): e0170466
    Link | PDF (article + supplement) | Data
    The ability to correct for errors that arise from unreliable perceptions and motor commands is essential to human dexterity. In this paper, we examined how participants correct for movement errors in a naturalistic task. Even though participants had ample time to compensate for experimentally induced perturbations, their amount of correction was affected by uncertainty about the target location. In fact, our analyses suggest that participants were optimally lazy, limiting their effort to just as much as needed so as not to significantly affect their overall performance in the task, consistent with theories of stochastic optimal control.

2014

  • Acerbi L, Ma WJ, Vijayakumar S (2014)
    A Framework for Testing Identifiability of Bayesian Models of Perception
    In Proc. Advances in Neural Information Processing Systems 27 (NeurIPS '14), Montréal, Canada
    Link | Paper (PDF) | Appendix (PDF)
    Bayesian observer models are very effective at describing human performance in perceptual tasks, so much so that they are trusted to faithfully recover hidden mental representations from the data. However, the intrinsic degeneracy of the Bayesian framework, as multiple combinations of elements can yield empirically indistinguishable results, prompts the question of model identifiability. In this work, we proposed a novel framework for a systematic testing of the identifiability of a significant class of Bayesian observer models, with practical applications for improving experimental design.
  • Acerbi L, Vijayakumar S, Wolpert DM (2014)
    On the Origins of Suboptimality in Human Probabilistic Inference
    PLoS Computational Biology 10(6): e1003661
    Link | PDF (article + supplement) | Data
    The process of decision making involves combining sensory information with statistics collected from prior experience. In this study, we used a novel experimental setup to examine the role of complexity of prior experience on suboptimal decision making. Participants' performance in our task, which did not require subjects to remember past events, was mostly unaffected by the complexity of the prior distributions, suggesting that remembering the patterns of past events constitutes more of a challenge to decision making than manipulating the complex probabilistic information. We introduced a mathematical description that captures the pattern of human responses in our task better than previous accounts.

Before 2014

  • Acerbi L, Dennunzio A, Formenti E (2013)
    Surjective multidimensional cellular automata are non-wandering: A combinatorial proof
    Information Processing Letters 113(5-6): 156-159
    Link
  • Acerbi L, Wolpert DM, Vijayakumar S (2012)
    Internal Representations of Temporal Statistics and Feedback Calibrate Motor-Sensory Interval Timing
    PLoS Computational Biology 8(11): e1002771
    Link | PDF (article + supplement) | Data
  • Acerbi L, Dennunzio A, Formenti E (2009)
    Conservation of Some Dynamical Properties of Operations on Cellular Automata
    Theoretical Computer Science, 410(38-40): 3685-3693
    Link
  • Acerbi L, Dennunzio A, Formenti E (2007)
    Shifting and Lifting of Cellular Automata
    In Proc. Third Conference on Computability in Europe, CiE 2007: 1-10; June 2007
    Link

Thesis

  • Acerbi L (2015)
    Complex internal representations in sensorimotor decision making: a Bayesian investigation
    Doctoral dissertation, The University of Edinburgh. Advisors: Prof. Sethu Vijayakumar, Prof. Daniel M. Wolpert
    Link | PDF

 

Disclaimer: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.