Anton Björklund defends his PhD thesis on Interpretable and explainable machine learning for natural sciences

On the 31st of May 2024, M.Sc. Anton Björklund defends his PhD thesis "Interpretable and explainable machine learning for natural sciences". The thesis is related to research done in the Department of Computer Science and in the Exploratory Data Analysis group.

M.Sc. Anton Björklund defends his doctoral thesis "Interpretable and explainable machine learning for natural sciences" on Friday, the 31st of May 2024, at 13:00 in Auditorium B123 of the University of Helsinki Exactum building (Pietari Kalmin katu 5, 1st floor). His opponent is Professor Benoît Frénay (University of Namur, Belgium), and the custos is Professor Kai Puolamäki (University of Helsinki). The defence will be held in English.

The thesis of Anton Björklund is part of the research done in the Department of Computer Science and in the Exploratory Data Analysis group at the University of Helsinki. His supervisor has been Professor Kai Puolamäki (University of Helsinki).

Interpretable and explainable machine learning for natural sciences

Machine learning and artificial intelligence are becoming fundamental parts of the modern world. Characteristic of modern machine learning is the use of large and complex models. We call these kinds of models black box models because their internal reasoning is practically impossible to follow. However, understanding the processes and decisions is important both when human lives are affected and in the scientific discovery of new knowledge.

The topic of this dissertation is machine learning where human understanding is desired or required. We discuss two approaches to enable such understanding: interpretable machine learning and explainable artificial intelligence. Interpretable machine learning involves the use of transparent processes and models that are directly understandable. These kinds of methods let us verify that they work correctly and, possibly, extract new knowledge.

The use of black box models is often motivated by their higher accuracy. If we want to use black box models, we have to turn to explainable artificial intelligence. Here, the goal is not to replace the complex models, but rather to extract more information from them in order to better understand their reasoning.

In this dissertation we take a closer look at two types of interpretable machine learning methods: how robust regression deals with unreliable data, and how non-negative matrix factorisation decomposes the data into recognisable parts. As for explainable artificial intelligence, we look at model-agnostic local explanation methods that can explain individual predictions from any black box model, and at how we can visualise multiple local explanations to get a holistic view of the behaviour of a black box model.
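To make the idea of a model-agnostic local explanation concrete, the following minimal sketch in Python illustrates the general recipe; it is an illustration of the technique, not the specific method developed in the thesis, and the model choice, kernel width, and other parameters are arbitrary placeholders.

    # Illustrative sketch of a model-agnostic local explanation
    # (a surrogate-model approach); not the thesis's own method.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge

    # A generic "black box": any model with a predict() method works.
    X, y = make_regression(n_samples=500, n_features=5, random_state=0)
    black_box = RandomForestRegressor(random_state=0).fit(X, y)

    # Explain the prediction for one point by sampling around it.
    x0 = X[0]
    rng = np.random.default_rng(0)
    Z = x0 + rng.normal(scale=0.5, size=(200, X.shape[1]))
    yz = black_box.predict(Z)

    # Weight the perturbed points by proximity to x0 (Gaussian kernel)
    # and fit a simple linear surrogate centred on x0.
    w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 2.0)
    surrogate = Ridge(alpha=1.0).fit(Z - x0, yz, sample_weight=w)

    # The surrogate's coefficients act as local feature importances.
    print(surrogate.coef_)

Because only the black box model's predictions are queried, the same recipe applies to any model, which is what makes such explanations model-agnostic.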

Finally, the increased use of machine learning also extends to scientific applications. The main objective of science is the pursuit of knowledge, which makes interpretability and explanations crucial tools when applying machine learning. An advantage of scientific domains is the vast amount of background knowledge that can be used to inform and improve the modelling. This dissertation demonstrates applications of interpretable machine learning and explainable artificial intelligence in two scientific domains: high-energy physics and atmospheric science.

Availability of the dissertation

An electronic version of the doctoral dissertation will be available in the University of Helsinki open repository Helda at http://urn.fi/URN:ISBN:978-952-84-0144-5.

Printed copies will be available on request from Anton Björklund: anton.bjorklund@helsinki.fi