We study self-supervised and unsupervised learning to uncover biologically and therapeutically meaningful information from multiplexed microscopy images of cancer tissue samples. Self-supervised learning lets us extract representations from image data without the annotations required by supervised learning approaches. In addition, it can yield more generally applicable representations, as the model is not optimized for (and therefore biased toward) a specific task.
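To illustrate the idea of learning representations without annotations, the sketch below implements the NT-Xent contrastive objective used by SimCLR-style self-supervised methods, where embeddings of two augmented views of the same image are pulled together and all other pairs are pushed apart. This is a minimal, illustrative example in numpy; the function name and the use of this particular objective are assumptions for illustration, not a description of our actual pipeline.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (n, d) embeddings of two augmented views of the same n images.
    Positives are the matching rows across the two views; all other
    pairs in the 2n-sample batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                  # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    # the positive of sample i is i + n (and vice versa)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Training an encoder to minimize this loss on augmented image pairs produces representations without any manual labels; matching views should score a lower loss than mismatched ones.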
We apply these methods to explore various solid tumor samples imaged with the cyclic multiplexed immunofluorescence approach developed earlier by our collaborator (Blom et al., 2017). Together with Dr. Pellinen, we are studying these samples at the single-cell and tissue-compartment levels to discover new predictive biomarkers separately for each tumor type, and to compare the findings across tumors in a pan-cancer study.
We study self-supervised learning and generative models to profile single cells under various perturbations. We use these methods to extract the finest details in large-scale datasets and to correct experimental effects, such as batch effects, that hinder the extraction of meaningful information from the data.
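As a minimal sketch of what batch-effect correction means in practice, the function below removes a simple additive batch effect by centering each feature on its per-batch mean, a common baseline before more sophisticated model-based corrections. The function name is hypothetical, and real pipelines typically use richer approaches; this only illustrates the problem setting.

```python
import numpy as np

def center_per_batch(embeddings, batch_ids):
    """Remove a simple additive batch effect from cell embeddings.

    embeddings: (n_cells, n_features) array.
    batch_ids:  (n_cells,) array of batch labels.
    Each feature is centered on its per-batch mean, so systematic
    offsets between acquisition batches no longer dominate the signal.
    """
    corrected = embeddings.astype(float).copy()
    for b in np.unique(batch_ids):
        mask = batch_ids == b
        corrected[mask] -= corrected[mask].mean(axis=0)
    return corrected
```

After correction, within-batch means are zero, so downstream clustering or representation analysis compares cells on biology rather than on which batch they were imaged in.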
We combine representation learning and generative methods to study counterfactuals. We are specifically interested in predicting untested conditions (i.e., a virtual lab) to reduce the search space required for experimental testing in the lab.
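One established way to predict untested conditions is latent-space arithmetic in the style of scGen: estimate a perturbation as the mean displacement between perturbed and control embeddings, then apply that displacement to unperturbed query cells. The sketch below shows only this arithmetic step, assuming embeddings from some pretrained generative model; the function name is hypothetical.

```python
import numpy as np

def predict_counterfactual(ctrl_latent, pert_latent, query_ctrl_latent):
    """scGen-style counterfactual prediction via latent arithmetic.

    ctrl_latent:       (n, d) embeddings of control cells.
    pert_latent:       (m, d) embeddings of the same population perturbed.
    query_ctrl_latent: (k, d) embeddings of unperturbed query cells.
    Returns predicted embeddings of the query cells under the
    perturbation, to be decoded by the generative model.
    """
    delta = pert_latent.mean(axis=0) - ctrl_latent.mean(axis=0)
    return query_ctrl_latent + delta
```

Decoding the shifted embeddings yields predicted phenotypes for condition combinations that were never measured, which is the sense in which such models narrow the experimental search space.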
Analysis of cellular phenotypes with unbiased image-based generative models
Biological structures are three-dimensional, yet they are often studied using simplified, practically two-dimensional samples and images. We are studying how machine learning approaches can extract the additional information available in 3D cell cultures and tissue samples. This work includes experiments in sample preparation and imaging as well as machine learning method development for 3D imaging data.
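To make concrete how 2D image models extend to volumetric data, the sketch below implements a naive valid 3D convolution (cross-correlation), the basic building block that lets CNN-style architectures operate on image stacks instead of single planes. This is a didactic example with a hypothetical function name; practical models use optimized library implementations.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D cross-correlation of a volume with a kernel.

    volume: (Z, Y, X) image stack, e.g. a confocal z-stack.
    kernel: (kz, ky, kx) filter.
    Each output voxel is the elementwise product of the kernel with
    the overlapping sub-volume, summed, which is how a 3D CNN layer
    aggregates context along depth as well as within the image plane.
    """
    kz, ky, kx = kernel.shape
    vz, vy, vx = volume.shape
    out = np.zeros((vz - kz + 1, vy - ky + 1, vx - kx + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = (volume[z:z + kz, y:y + ky, x:x + kx] * kernel).sum()
    return out
```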