Research

Our research group focuses on uncovering complex information from bioimages using machine learning. We are interested in developing novel deep learning solutions for bioimage analysis, studying various learning approaches to create general models, and applying these methods and models to profile cancer cell and tissue samples imaged with fluorescence microscopy.
Unbiased multiplexed cancer tissue representation learning

We study self-supervised and unsupervised learning to uncover biologically and therapeutically meaningful information from multiplexed microscopy imaging data of cancer tissue samples. With self-supervised learning, we can extract meaningful representations from image data without the annotations required by supervised learning approaches. In addition, self-supervised learning can yield more generally applicable representations, because the model is not optimized for (and thereby biased toward) a specific task.
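
As a minimal illustration of the idea, the sketch below implements a SimCLR-style contrastive objective in which two augmented views of the same multiplexed image patch are pulled together in embedding space without any labels. The encoder architecture, channel count, and hyperparameters are illustrative placeholders, not our actual models.

    # SimCLR-style contrastive objective: two augmented views of the same
    # patch are positives, every other patch in the batch is a negative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        """Small CNN mapping a multi-channel image patch to an embedding."""
        def __init__(self, in_channels=8, embed_dim=128):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.projection = nn.Sequential(
                nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, embed_dim),
            )

        def forward(self, x):
            return self.projection(self.backbone(x))

    def nt_xent_loss(z1, z2, temperature=0.5):
        """NT-Xent loss: matched views are positives, the rest are negatives."""
        z = F.normalize(torch.cat([z1, z2]), dim=1)          # (2N, D)
        sim = z @ z.t() / temperature                        # cosine similarities
        n = z1.size(0)
        sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Toy usage: random tensors stand in for two augmentations of 16 patches.
    encoder = Encoder(in_channels=8)
    views_a = torch.randn(16, 8, 64, 64)
    views_b = torch.randn(16, 8, 64, 64)
    loss = nt_xent_loss(encoder(views_a), encoder(views_b))
    loss.backward()

Once trained, the encoder's embeddings can serve as per-patch representations for downstream analyses such as clustering or association with clinical variables.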

We apply these methods to explore various solid tumor samples imaged with a cyclic multiplexed immunofluorescence approach developed earlier by our collaborator (Blom et al., 2017). Together with Dr. Pellinen, we are studying these samples at the single-cell and tissue-compartment levels to discover new predictive biomarkers separately for each tumor type, and to compare the findings to other tumors in a pan-cancer study.

Self-supervised learning enables unbiased patient characterization from multiplexed cancer tissue microscopy images


Self-supervised representation learning and generative AI for image-based profiling

We study self-supervised learning and generative models to profile single cells under various perturbations. We use these methods to extract the finest details from large-scale datasets and to correct experimental effects, such as batch effects, that hinder the extraction of meaningful information from the data.
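
As a simple example of what such a correction can look like, the sketch below standardizes each feature per imaging batch against that batch's negative controls. The column names, control labels, and the centering scheme are illustrative assumptions rather than a description of our pipeline.

    # Per-batch standardization of learned single-cell profiles against
    # each batch's negative controls (a simple batch-effect correction).
    import numpy as np
    import pandas as pd

    def correct_batch_effects(profiles: pd.DataFrame, batch_col="batch",
                              control_col="is_control") -> pd.DataFrame:
        """Center and scale each feature per batch using that batch's controls."""
        feature_cols = [c for c in profiles.columns if c not in (batch_col, control_col)]
        corrected = profiles.copy()
        for batch, group in profiles.groupby(batch_col):
            controls = group[group[control_col]]
            mu = controls[feature_cols].mean()
            sigma = controls[feature_cols].std().replace(0, 1.0)   # avoid div-by-zero
            corrected.loc[group.index, feature_cols] = (group[feature_cols] - mu) / sigma
        return corrected

    # Toy usage: two imaging batches with shifted feature distributions.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "batch": ["A"] * 50 + ["B"] * 50,
        "is_control": ([True] * 10 + [False] * 40) * 2,
        "feat_1": np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 2, 50)]),
        "feat_2": np.concatenate([rng.normal(1, 1, 50), rng.normal(-2, 0.5, 50)]),
    })
    print(correct_batch_effects(df).groupby("batch")[["feat_1", "feat_2"]].mean())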

We combine representation learning and generative methods to study counterfactuals. We are specifically interested in enabling the prediction of untested conditions (i.e., a virtual lab) to reduce the search space required for experimental testing in the lab.
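
The sketch below gives a minimal latent-space illustration of this counterfactual idea: the average shift a treatment induces in learned embeddings is transferred to a context that was never treated in the lab. The random embeddings and the purely linear shift are stand-ins for the representation and generative models we actually study.

    # Latent-space counterfactual: shift control-cell embeddings by a learned
    # "perturbation direction" to predict an untested (context, treatment) pair.
    import numpy as np

    def perturbation_direction(treated: np.ndarray, control: np.ndarray) -> np.ndarray:
        """Average latent shift induced by a treatment (arrays: cells x dims)."""
        return treated.mean(axis=0) - control.mean(axis=0)

    def predict_counterfactual(control_new_context: np.ndarray,
                               direction: np.ndarray) -> np.ndarray:
        """Apply the shift to control embeddings of a context never treated in the lab."""
        return control_new_context + direction

    # Toy usage with random embeddings standing in for learned representations.
    rng = np.random.default_rng(1)
    ctrl_a = rng.normal(0, 1, size=(200, 32))        # cell type A, untreated
    treat_a = ctrl_a + rng.normal(0.8, 0.1, 32)      # cell type A, drug-treated
    ctrl_b = rng.normal(2, 1, size=(150, 32))        # cell type B, untreated only

    direction = perturbation_direction(treat_a, ctrl_a)
    predicted_b_treated = predict_counterfactual(ctrl_b, direction)
    print(predicted_b_treated.shape)                 # (150, 32): virtual-lab prediction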

Analysis of cellular phenotypes with unbiased image-based generative models

High-throughput 3D spheroid profiling of cancer-stroma interactions

Biological structures are inherently three-dimensional, yet they are often studied using simplified 2D models. In this project, we combine advanced 3D imaging, innovative sample preparation, and machine learning to extract deeper, spatially resolved insights from 3D cell cultures, with a focus on cancer-stroma interactions and drug responses. Our pipeline integrates live-cell fluorescent staining, deep learning-based segmentation, and Bayesian optimization to analyze single cells within dense spheroids, capturing subtle phenotypic changes that are often missed by conventional assays.
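
The sketch below illustrates the role Bayesian optimization can play in such a pipeline: tuning segmentation hyperparameters against a small annotated reference set. It uses scikit-optimize and a synthetic objective as placeholders; the actual parameters, segmentation model, and quality metric in our pipeline differ.

    # Bayesian optimization of segmentation hyperparameters for dense spheroids.
    import numpy as np
    from skopt import gp_minimize

    def segmentation_quality(params):
        """Placeholder objective: run segmentation with `params` on a small
        annotated subset and return a loss (1 - agreement with ground truth).
        Here a noisy synthetic quadratic stands in for the real evaluation."""
        probability_threshold, min_cell_volume = params
        # Pretend the best settings are a threshold of ~0.55 and ~150 voxels.
        loss = (probability_threshold - 0.55) ** 2 + ((min_cell_volume - 150) / 300) ** 2
        return loss + np.random.normal(0, 0.01)      # evaluation noise

    result = gp_minimize(
        segmentation_quality,
        dimensions=[(0.05, 0.95),      # probability threshold of the segmentation model
                    (20, 500)],        # minimum object volume in voxels
        n_calls=25,
        random_state=0,
    )
    print("best parameters:", result.x, "best loss:", result.fun)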

This work is a collaboration between the Bioimage Profiling group and Adj. Prof. Vilja Pietiäinen (Functional Precision Medicine of Pediatric Solid Tumors, Kallioniemi Research Group). Pietiäinen’s team provides the biological samples and high-content wet-lab expertise, including co-culture spheroid assays and drug screening, while our group develops the computational pipelines, focusing on AI-driven 3D image analysis, feature extraction, and phenotypic profiling. Together, we bridge experimental and computational biology to enable high-throughput analysis of complex tumor microenvironments.

Our recent bioRxiv preprint demonstrates the pipeline in renal cancer and immune cell co-cultures. We are now extending the framework to include the Cell Painting protocol in 3D, aiming to build rich morphological profiles from more complex and clinically relevant samples. The ultimate goal is to apply this approach to patient-derived samples, advancing toward predictive, personalized cancer models.

