Our main goals are to
Most of our current activities fall under the first three topics (Probabilistic inference, AI for ultrasonics, and Virtual Laboratories), which have active ongoing projects, but we also continue to work on the other topics listed here.
Statistical machine learning provides tools for understanding complex data collections, using Bayesian inference to cope with the uncertainty about model parameters that arises from learning from finite data. We develop computationally efficient and maximally automatic approximate algorithms for Bayesian inference, in the context of probabilistic programming and machine learning. Our goal is to let the user focus on model specification, without needing to worry about the specifics of inference.
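As a minimal, self-contained illustration of approximate Bayesian inference (one classic technique, not the group's actual algorithms), the sketch below fits a one-parameter Bayesian logistic regression with a Laplace approximation: the posterior is replaced by a Gaussian centred at the mode, with variance given by the inverse of the local curvature. All data and settings are made up for the example.

```python
import numpy as np
from scipy import optimize

# Hypothetical toy data: binary outcomes generated from a single feature.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
true_w = 1.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_w * x)))

def neg_log_posterior(w):
    # Standard-normal prior on w plus Bernoulli likelihood (negated log).
    logits = w * x
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    log_prior = -0.5 * w ** 2
    return -(log_lik + log_prior)

# Laplace approximation: find the posterior mode, then use the local
# curvature (second derivative) as the precision of a Gaussian approximation.
res = optimize.minimize_scalar(neg_log_posterior)
w_map = res.x
eps = 1e-4
hess = (neg_log_posterior(w_map + eps) - 2 * neg_log_posterior(w_map)
        + neg_log_posterior(w_map - eps)) / eps ** 2
posterior_sd = 1.0 / np.sqrt(hess)
print(f"approximate posterior: N({w_map:.2f}, {posterior_sd:.2f}^2)")
```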
We build machine learning and artificial intelligence tools for modeling ultrasound propagation in complex environments. We develop methods for, e.g., inverse problems (detecting fouling or deformations), focusing ultrasound for cleaning, and acoustic levitation. The work is done together with the group of Ari Salmi and Edward Haeggström working on ultrasound physics, and we also collaborate with Altum Technologies, which provides practical ultrasonic cleaning solutions.
The main activities currently aim at sustainable and safe cleaning of industrial production equipment, to reduce the environmental and economic harm of fouling that accumulates over time. We develop AI-enhanced sensing technologies for detecting and quantifying fouling and for controlling the cleaning process so that the risk of damage is minimized.
Virtual Laboratories offer a new perspective on scientific knowledge generation. Many elements of scientific discovery are general across scientific domains, and by isolating them from domain-specific elements (models, simulations, theories) we can develop AI techniques that assist scientific discovery, as well as industrial R&D, more efficiently. Any research environment, for instance a natural science laboratory, can leverage these techniques by framing its operations as a virtual laboratory.
Today it is easy to collect information about individuals by monitoring their activities, either with explicit sensors or through log data collected by the computers they interact with. We develop the models needed to infer interesting and useful information from such data, in order to describe, understand and enhance our daily lives.
Machine learning research is often carried out in elegant but simplified setups: it is assumed that all relevant data is provided in the form of a simple matrix or tensor. In most practical applications this is not the case; instead we need to combine information scattered across multiple heterogeneous data sources. We provide fundamental modeling solutions for combining such data sources.
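As a toy illustration of combining two heterogeneous data sources (the group's actual models are richer than this), the sketch below uses canonical correlation analysis from scikit-learn to recover the signal shared by two synthetic views of the same samples. All data here is simulated, and the two "views" stand in for, say, sensor readings and text-derived features.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Two views of the same 100 samples, both driven by a shared latent signal
# plus view-specific noise; combining the sources should recover that signal.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))
view_a = latent @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(100, 10))
view_b = latent @ rng.normal(size=(2, 20)) + 0.1 * rng.normal(size=(100, 20))

# CCA finds paired projections of the two views that are maximally
# correlated, i.e. the information the sources share.
cca = CCA(n_components=2)
a_scores, b_scores = cca.fit_transform(view_a, view_b)
for k in range(2):
    corr = np.corrcoef(a_scores[:, k], b_scores[:, k])[0, 1]
    print(f"component {k}: canonical correlation {corr:.2f}")
```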
Hyperspectral cameras capture the full spectrum of light, instead of just the three channels of red, green and blue that mimic the limited vision of humans. Having access to this richer information makes most computer vision problems easier. Existing hyperspectral (HS) cameras are, however, expensive and large. We develop a low-cost alternative that uses AI to process images captured with a passive add-on device that can be attached to any camera, bringing HS imaging to smartphones and DSLRs. We also work on hyperspectral image analysis.
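A deliberately simplified sketch of the general idea behind computational hyperspectral imaging: learn a regression from a few measured camera channels back to a denser spectrum. The synthetic spectra, the random "camera" response matrix and the ridge regressor are all illustrative assumptions, not the group's actual hardware or models.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_bands, n_channels = 500, 31, 3

# Synthetic smooth spectra (31 bands) and a random "camera" that integrates
# each spectrum into 3 channel responses, standing in for RGB measurements.
spectra = np.cumsum(rng.normal(size=(n_train, n_bands)), axis=1)
camera = rng.random(size=(n_bands, n_channels))
rgb = spectra @ camera

# Per-pixel regression from the 3 channel responses to the full 31-band spectrum.
model = Ridge(alpha=1.0).fit(rgb, spectra)

# Reconstruct the spectrum of a new, unseen measurement.
test_spectrum = np.cumsum(rng.normal(size=(1, n_bands)), axis=1)
recovered = model.predict(test_spectrum @ camera)
print("mean squared reconstruction error:", np.mean((recovered - test_spectrum) ** 2))
```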
Many modern machine learning models are complex and require large training data sets, which makes learning difficult in applications where labeling examples is costly or laborious. We study data-efficient techniques for learning complex models from limited supervision, by utilizing related learning tasks (multi-task learning, transfer learning) and unlabeled observations or external constraints (semi-supervised learning). We also develop solutions for changing environments based on domain adaptation.
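As a small example of learning from limited supervision (one standard semi-supervised technique, not necessarily the group's own methods), the sketch below keeps only 50 labels of the scikit-learn digits data set, marks the rest as unlabeled, and lets the graph-based LabelSpreading model propagate labels to the unlabeled points.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

# Keep only a handful of labels; -1 marks an unlabeled example.
X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
y_partial = np.full_like(y, -1)
labeled_idx = rng.choice(len(y), size=50, replace=False)
y_partial[labeled_idx] = y[labeled_idx]

# Graph-based label propagation over a k-nearest-neighbour similarity graph.
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_partial)

# Evaluate the labels assigned to the points that had no supervision.
unlabeled = y_partial == -1
accuracy = np.mean(model.transduction_[unlabeled] == y[unlabeled])
print(f"accuracy on the {unlabeled.sum()} unlabeled digits: {accuracy:.2f}")
```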