“We develop new “smart” machine learning methods, in particular for approximate Bayesian inference, a powerful and principled way to extract information from data and process it based on probability theory. These methods are used for artificial intelligence and data science”, says Acerbi.
Secondly, they study Bayesian inference and decision making in human cognition, using machine learning algorithms both as tools to analyze human behavior and as models for what the brain might be doing.
“After all, the brain is constantly extracting and processing information from sensory data and memory, exactly as our methods do”, says Acerbi.
Robust and sample-efficient probabilistic machine learning
One research area Acerbi is very excited about is robust and sample-efficient probabilistic machine learning.
“We develop methods to perform optimization and approximate Bayesian inference with complex scientific models, with applications in cognitive neuroscience and other fields. We are excited about how our work can extend and complement FCAI's research programs in agile probabilistic AI and simulator-based inference.”
To work with real models and data, they have developed algorithms that are robust to noise and sample-efficient, which makes analyses much faster. This efficiency matters most when the models require complex computations, when there are many datasets to analyze, and potentially in time-critical applications.
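As a toy illustration of the kind of objective these methods are built for (a hypothetical simulator model sketched for this article, not code from Acerbi's lab), the Python snippet below estimates a model's log-likelihood by simulation, so every evaluation is both computationally costly and noisy:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data set: 200 reaction times from a simple log-normal model.
observed = rng.lognormal(mean=0.2, sigma=0.4, size=200)

def simulate(theta, n_sim):
    """Draw synthetic data from the model at parameters theta = (mu, sigma)."""
    mu, sigma = theta
    return rng.lognormal(mean=mu, sigma=sigma, size=n_sim)

def estimated_log_likelihood(theta, n_sim=2000):
    """Monte Carlo estimate of the log-likelihood from a histogram of simulations.
    Each call runs the simulator, so the value is expensive to compute and
    fluctuates from call to call: the objective is noisy."""
    sims = simulate(theta, n_sim)
    bins = np.linspace(0.0, observed.max() * 1.5, 50)
    probs, edges = np.histogram(sims, bins=bins, density=True)
    idx = np.clip(np.digitize(observed, edges) - 1, 0, len(probs) - 1)
    return np.sum(np.log(probs[idx] + 1e-9))  # small floor avoids log(0)

# Two calls at the same parameters give different values.
print(estimated_log_likelihood((0.2, 0.4)))
print(estimated_log_likelihood((0.2, 0.4)))

Repeated evaluations at the same parameters return different numbers, which is exactly the situation that noise-robust, sample-efficient methods are designed to handle.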
“In collaboration with Prof. Wei Ji Ma at New York University, we have developed a smart optimization method nicknamed BADS (Bayesian Adaptive Direct Search).”
Acerbi's lab has also recently applied the sample-efficient approach to Bayesian inference, by developing a new method called Variational Bayesian Monte Carlo (VBMC).
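The following sketch is not BADS or VBMC, only a generic illustration of the surrogate-model idea that such sample-efficient methods build on (hypothetical objective and settings): a Gaussian process is fitted to a small number of noisy evaluations and then used to decide where to evaluate next.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def noisy_objective(x):
    """Stand-in for an expensive, noisy model-fit score (e.g. an estimated log-likelihood)."""
    return -np.sin(3.0 * x) - x ** 2 + 0.7 * x + rng.normal(scale=0.05)

lower, upper = -1.0, 2.0
X = rng.uniform(lower, upper, size=5).reshape(-1, 1)        # small initial design
y = np.array([noisy_objective(x[0]) for x in X])

# Gaussian-process surrogate; alpha encodes the assumed observation noise.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.05 ** 2, normalize_y=True)

for _ in range(15):
    gp.fit(X, y)
    candidates = np.linspace(lower, upper, 200).reshape(-1, 1)
    mean, std = gp.predict(candidates, return_std=True)
    # Upper-confidence-bound rule: try points that look good or are still uncertain.
    x_next = candidates[np.argmax(mean + 2.0 * std)]
    X = np.vstack([X, x_next])
    y = np.append(y, noisy_objective(x_next[0]))

print(f"best parameter found after {len(y)} evaluations: {X[np.argmax(y)][0]:.3f}")

The point of this design is that the expensive objective is called only about twenty times, while the cheap surrogate can be queried as often as needed.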
“From a machine learning perspective, developing these algorithms poses a stimulating challenge as we need to build them with some ability to make simple intelligent choices themselves. It makes us think about the building blocks of intelligent adaptive behavior”, says Acerbi.
All these works have been published at top machine learning conferences, together with recent extensions of the methods.
“We believe that machine learning research should benefit the community”, says Acerbi.
Probabilistic human learning
Finally, Acerbi uses the same probabilistic machine learning tools as models for how the human brain may itself process information in a probabilistic way, the so-called Bayesian brain hypothesis.
“In a nutshell, a Bayesian observer uses probability theory to build beliefs about states of the world based on sensory observations and assumptions about the statistical structure of the environment”, says Acerbi. In their work, they “stress test” the Bayesian brain hypothesis against human behavioral data.
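As a minimal illustration of what a Bayesian observer means here (a standard textbook Gaussian example, not a model from Acerbi's papers), the sketch below combines a prior over a stimulus with a noisy sensory measurement; the resulting estimate weights each source of information by its reliability:

import numpy as np

def bayesian_observer_estimate(measurement, prior_mean, prior_sd, sensory_sd):
    """Posterior over the true stimulus given one noisy measurement.
    Prior:      stimulus ~ Normal(prior_mean, prior_sd)
    Likelihood: measurement | stimulus ~ Normal(stimulus, sensory_sd)
    With Gaussians the posterior is Gaussian, and its mean is a
    reliability-weighted average of the prior mean and the measurement."""
    w_prior = 1.0 / prior_sd ** 2        # precision (reliability) of the prior
    w_sense = 1.0 / sensory_sd ** 2      # precision of the sensory evidence
    post_mean = (w_prior * prior_mean + w_sense * measurement) / (w_prior + w_sense)
    post_sd = np.sqrt(1.0 / (w_prior + w_sense))
    return post_mean, post_sd

# The environment says stimuli cluster around 0 deg, the senses report +10 deg.
print(bayesian_observer_estimate(10.0, prior_mean=0.0, prior_sd=5.0, sensory_sd=2.0))
# A noisier sense pulls the estimate back toward the prior expectation.
print(bayesian_observer_estimate(10.0, prior_mean=0.0, prior_sd=5.0, sensory_sd=10.0))

With a noisier sense, the estimate is pulled more strongly toward the prior expectation, which is the kind of prediction that can be compared against human behavior.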
“For example, our work on how the brain