“We develop new ‘smart’ machine learning methods, in particular for approximate Bayesian inference, a powerful and principled way to extract information from data and process it based on probability theory. These methods are used for artificial intelligence and data science”, says Luigi Acerbi, a new assistant professor at the Department of Computer Science at the University of Helsinki, and a new member of FCAI.
Second, the group studies Bayesian inference and decision making in human cognition, using machine learning algorithms both as tools to analyze human behavior and as models of what the brain might be doing.
“After all, the brain is constantly extracting and processing information from sensory data and memory, exactly like our methods do”, says Acerbi.
Robust and sample-efficient probabilistic machine learning
One research area Acerbi is very excited about is robust and sample-efficient probabilistic machine learning.
“We develop methods to perform optimization and approximate Bayesian inference with complex scientific models, with applications in cognitive neuroscience and other fields. We are excited about how our work can extend and complement FCAI's research programs in agile probabilistic AI and simulator-based inference.”
To work with real models and data, they have developed algorithms that are robust to noise and sample-efficient, meaning they need far fewer model evaluations and so complete analyses much faster. This efficiency is particularly important when the models require complex computations, when there are many datasets to analyze, or, potentially, in time-critical applications.
“In collaboration with Prof. Wei Ji Ma at New York University, we have developed a smart optimization method nicknamed BADS (formally, Bayesian Adaptive Direct Search), which has proven to be much more efficient than other common optimization algorithms and is currently used in dozens of computational labs across the world,” says Acerbi.
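To give a flavor of how a sample-efficient, noise-robust optimizer of this kind is used, here is a minimal sketch assuming the Python interface of the companion PyBADS toolbox: the user supplies a target function, a starting point, and hard and plausible parameter bounds. The toy noisy objective and all numbers are invented for illustration, and the exact argument and result names may differ from the released package.

```python
# Minimal usage sketch (assumed PyBADS interface; check the toolbox docs).
import numpy as np
from pybads import BADS

def noisy_neg_loglik(theta):
    """Toy stand-in for a model's negative log-likelihood that can only be
    estimated with simulation noise (the setting BADS is designed for)."""
    theta = np.atleast_1d(theta)
    smooth_part = np.sum((theta - 1.0) ** 2)        # hypothetical landscape
    return smooth_part + 0.1 * np.random.randn()    # stochastic evaluation

x0 = np.array([0.0, 0.0])                                 # starting point
lb, ub = np.array([-5.0, -5.0]), np.array([5.0, 5.0])     # hard bounds
plb, pub = np.array([-2.0, -2.0]), np.array([2.0, 2.0])   # plausible bounds

bads = BADS(noisy_neg_loglik, x0, lb, ub, plb, pub)
result = bads.optimize()             # few, carefully chosen evaluations
print(result["x"], result["fval"])   # estimated optimum and target value
```

The plausible bounds tell the algorithm where good solutions are likely to lie, which is part of what makes the search so sample-efficient.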
Variational Bayesian Monte Carlo
Acerbi's lab has also recently applied the sample-efficient approach to Bayesian inference by developing a new method called Variational Bayesian Monte Carlo.
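Compared with the optimization example above, the goal here is not a single best-fitting parameter vector but a full approximate posterior distribution, together with an estimate of the model evidence, from a limited budget of model evaluations. Below is a minimal sketch assuming the Python interface of the companion PyVBMC toolbox; the toy model is invented and the interface details are recalled from its documentation, so names may differ from the released package.

```python
# Minimal usage sketch (assumed PyVBMC interface; check the toolbox docs).
import numpy as np
from pyvbmc import VBMC

def log_joint(theta):
    """Toy unnormalized log posterior (log-likelihood + log-prior),
    standing in for an expensive scientific model."""
    theta = np.atleast_1d(theta)
    log_likelihood = -0.5 * np.sum((theta - 1.0) ** 2)  # hypothetical
    log_prior = -0.5 * np.sum(theta ** 2 / 10.0)        # broad Gaussian prior
    return log_likelihood + log_prior

x0 = np.zeros(2)                                        # starting point
lb, ub = np.full(2, -10.0), np.full(2, 10.0)            # hard bounds
plb, pub = np.full(2, -3.0), np.full(2, 3.0)            # plausible bounds

vbmc = VBMC(log_joint, x0, lb, ub, plb, pub)
vp, results = vbmc.optimize()        # returns a variational posterior
samples, _ = vp.sample(10_000)       # draws from the approximate posterior
print(samples.mean(axis=0))          # e.g. posterior means of the parameters
print(results["elbo"])               # lower bound on the log model evidence
```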
“From a machine learning perspective, developing these algorithms poses a stimulating challenge as we need to build them with some ability to make simple intelligent choices themselves. It makes us think about the building blocks of intelligent adaptive behavior”, says Acerbi.
This work has been published at top machine learning conferences, with recent extensions to be presented at the NeurIPS conference in December 2020, and the methods are available as easy-to-use open-source toolboxes.
“We believe that machine learning research should benefit the community”, says Acerbi.
Probabilistic human learning
Finally, Acerbi uses the same probabilistic machine learning tools as models of how the human brain may itself process information in a probabilistic way, an idea known as the Bayesian brain hypothesis.
“In a nutshell, a Bayesian observer uses probability theory to build beliefs about states of the world based on sensory observations and assumptions about the statistical structure of the environment”. In their work, Acerbi says, they “stress test” the Bayesian brain hypothesis until it breaks, to uncover details of how the brain gathers and processes information.
“For example, our work on how the brain combines information from different sensory modalities has potential applications to neuroprosthetics and to improving virtual reality interfaces”. The group's research on human decision making can also inform FCAI's research program on Interactive AI, for example by helping to infer human beliefs and goals.
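As a concrete illustration of the cue-combination setting mentioned above, here is a minimal sketch of the textbook ideal-observer solution: with independent Gaussian sensory noise and a flat prior, the Bayesian estimate is the precision-weighted average of the individual cues. The numbers are invented for illustration, and the group's actual models go well beyond this basic case.

```python
import numpy as np

def combine_gaussian_cues(means, sds):
    """Precision-weighted fusion of independent Gaussian cues:
    the standard Bayesian (ideal-observer) solution under a flat prior."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(sds, dtype=float) ** 2
    combined_mean = np.sum(precisions * means) / np.sum(precisions)
    combined_sd = np.sqrt(1.0 / np.sum(precisions))
    return combined_mean, combined_sd

# Hypothetical cues: a precise visual estimate and a noisier auditory one.
mu, sd = combine_gaussian_cues(means=[1.0, 3.0], sds=[0.5, 2.0])
print(f"combined estimate: {mu:.2f} +/- {sd:.2f}")
# The fused estimate sits closer to the more reliable (visual) cue and is
# more precise than either cue alone.
```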