Hermanni Hälvä defends his PhD thesis on Nonlinear Independent Component Analysis with Structured Priors

On the 23rd of August 2024, M.Phil. Hermanni Hälvä defends his PhD thesis on Nonlinear Independent Component Analysis with Structured Priors. The thesis is related to research done in the Department of Computer Science and the Neuroinformatics group.

M.Phil. Hermanni Hälvä defends his doctoral thesis "Nonlinear Independent Component Analysis with Structured Priors" on Friday the 23rd of August 2024 at 13:00 in the University of Helsinki Exactum building, Auditorium CK112 (Pietari Kalmin katu 5, basement). His opponent is Professor Matthias Bethge (University of Tübingen, Germany) and the custos is Professor Aapo Hyvärinen (University of Helsinki). The defence will be held in English.

The thesis of Hermanni Hälvä is part of the research done in the Department of Computer Science and the Neuroinformatics group at the University of Helsinki. His supervisor has been Professor Aapo Hyvärinen (University of Helsinki).

Nonlinear Independent Component Analysis with Structured Priors

Our world is filled with complex, high-dimensional, noisy, unlabeled data. The goal of unsupervised feature learning is to learn semantically meaningful, low-dimensional, useful representations from such data. Nonlinear independent component analysis (ICA) provides a natural, principled solution to this task. Despite its theoretical appeal for feature learning, the most standard version of nonlinear ICA is known to be catastrophically unidentifiable -- it is impossible to find the ground-truth data-generating nonlinear function because there are infinitely many equally good candidates. This thesis explores how identifiability in nonlinear ICA can be established by exploiting latent structure in the data-generating process. In addition to advancing the identifiability theory of nonlinear ICA, we also present several practical estimation algorithms.
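As a rough sketch of the setting (with notation chosen here for illustration rather than taken verbatim from the thesis), the observed data x are modelled as a nonlinear mixture of statistically independent latent sources s:

\[
  x = f(s), \qquad p(s) = \prod_{i=1}^{n} p_i(s_i).
\]

Identifiability asks whether f and the sources s can be recovered, up to harmless indeterminacies such as permutation and element-wise transformations, from observations of x alone; without further structure, infinitely many different pairs (f, s) reproduce exactly the same distribution of x.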

Many previous nonlinear ICA models attain identifiability by assuming access to observed auxiliary data that capture further information about the structure of the data-generating process. We show that, in contrast to these works, identifiability of nonlinear ICA is possible with appropriate latent structures. We first establish this in the time-series context with a hidden-Markov nonlinear ICA model, where the latent state captures discrete switching behaviour, a type of nonstationarity. We then extend this work beyond time series and introduce our structured nonlinear ICA (SNICA) model class. We prove that as long as the latent dependencies are sufficiently strong and non-Gaussian, any model that falls in this class is identifiable. In contrast to the mainly heuristic methods of previous works, we take a principled probabilistic approach throughout.
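As an illustrative sketch of such a structured prior (again, the notation here is ours and chosen for exposition), a discrete hidden-Markov state u_t can modulate the source distribution over time,

\[
  u_t \sim p(u_t \mid u_{t-1}), \qquad s_t \sim p(s_t \mid u_t), \qquad x_t = f(s_t),
\]

so that the switching-induced nonstationarity of the latent sources, rather than any observed auxiliary variable, provides the structure that makes the nonlinear mixing f identifiable.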

Using the SNICA framework, we introduce two new useful models and the related estimation algorithms. The first, Delta-SNICA, is the first nonlinear ICA model for time series that accounts for both nonstationarity and autocorrelation in a fully unsupervised setting. We illustrate the model's capabilities on real MEG data. The SNICA framework is not limited to time-series data but also applies to multi-dimensional dependency structures. Indeed, we introduce another model, tp-NICA, which is the first nonlinear ICA model able to exploit spatio-temporal dependencies by utilizing a t-process latent space. Its usefulness is demonstrated in the real-world task of remotely monitoring agricultural crops from satellite data.
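For readers who want a concrete toy picture of such a generative process, the following minimal sketch (illustrative only; the dimensions, parameter values and the simple leaky-tanh mixing are our assumptions, not code from the thesis) samples data from a switching, autocorrelated nonlinear ICA model of the kind described above:

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 1000, 3                        # time steps, number of latent sources
A = np.array([[0.95, 0.05],           # hidden-Markov transition matrix
              [0.10, 0.90]])
scales = np.array([[0.3, 1.0, 0.5],   # per-state source scales (nonstationarity)
                   [1.5, 0.4, 1.2]])

# Sample the discrete latent state sequence u_t from the Markov chain.
u = np.zeros(T, dtype=int)
for t in range(1, T):
    u[t] = rng.choice(2, p=A[u[t - 1]])

# Sample autocorrelated sources s_t whose scale is modulated by the state.
s = np.zeros((T, n))
for t in range(1, T):
    s[t] = 0.8 * s[t - 1] + scales[u[t]] * rng.standard_normal(n)

# Apply a fixed random nonlinear mixing f (a tiny two-layer leaky-tanh map).
W1 = rng.standard_normal((n, n))
W2 = rng.standard_normal((n, n))
x = (np.tanh(s @ W1) + 0.1 * (s @ W1)) @ W2   # observed data

print(x.shape)   # (1000, 3)
```

An unsupervised estimation method in the spirit of the thesis would then attempt to recover the sources s, and the hidden states, from the observations x alone.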

Availability of the dissertation

An electronic version of the doctoral dissertation will be available in the University of Helsinki open repository Helda at http://urn.fi/URN:ISBN:978-952-84-0160-5.

Printed copies will be available on request from Hermanni Hälvä: hermanni.halva@helsinki.fi