A team of four mathematicians from Berlin achieved the highest deblurring quality. Second place went to a team from Singapore and third to a team from the Netherlands. All three top teams relied on machine learning in their solutions. Teams from the University of Eastern Finland and the University of Helsinki also participated, though the Helsinki team knew in advance that their solution could not win under the competition rules, since the competition had been designed at the Department of Mathematics at the University of Helsinki. A large number of participants registered for the competition, and the data have already been downloaded over 1,000 times. Fifteen groups returned solutions.
The results were judged by how many letters of random text a computer could identify correctly in the sharpened photos. The algorithms were also expected to produce reasonable results on test images with content other than text; this ruled out methods that fill in characters regardless of the input image.
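The article does not reproduce the organisers' scoring code, but the idea of grading by correctly identified letters can be sketched as a character-accuracy score between the known random text and whatever an OCR engine reads from the sharpened photo. This is a minimal illustration only: the function name and sample strings are made up here, and the OCR step itself is assumed to happen upstream.

```python
from difflib import SequenceMatcher

def character_accuracy(ground_truth: str, ocr_output: str) -> float:
    """Fraction of ground-truth characters recovered by the OCR reading.

    Matching is order-preserving (longest-common-subsequence style),
    so dropped or garbled letters are penalised.
    """
    if not ground_truth:
        return 0.0
    matcher = SequenceMatcher(None, ground_truth, ocr_output, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(ground_truth)

# Example: an OCR reading that garbled one letter ("B" -> "8") and lost one.
truth = "QWERTY ZXCVBN ASDFGH"
reading = "QWERTY ZXCV8N ASDFG"
score = character_accuracy(truth, reading)  # 18 of 20 characters matched
```

A score like this makes methods directly comparable across the blur levels: a submission that leaves 70 per cent of the letters legible scores 0.7.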
"Amazing results were achieved with machine learning. The methods of inversion mathematics barely got halfway through our competition material, in which the images became blurrier at each of the 20 steps. With machine learning, even the blurriest images were deblurred so that 70 per cent of the letters became legible", says Professor Samuli Siltanen from the Department of Mathematics at the University of Helsinki.
The competitors were allowed to study in advance the technology with which the data had been produced. The setup used two cameras, which is why each image existed in one sharp and one blurry version.
"Usually, algorithms are tested with simulated data, but here we had real data, which offered us an interesting starting point", says Markus Juvonen, who is preparing a PhD thesis on the topic.
The competitors were expected to publish their results and document the method they had used. All the algorithms developed by the contestants are now public and have been tested by, among others, computer science professor Teemu Roos, who commented on them on Twitter.
The contest was organised by the Finnish inverse problems society (FIPS), headed by Professor Samuli Siltanen. Besides Siltanen and Juvonen, the contest organising committee included Post-doctoral Researcher Fernando Moura from the Inversion mathematics group at the University of Helsinki.
Martin Genzel, Jan Macdonald and Maximilian März are independent researchers currently focusing on deep learning for high-dimensional image reconstruction tasks. Theophil Trippe is writing his master's thesis based on this challenge (shout-out to Prof. Gabriele Steidl, who made this project possible!). They all share the same educational background from Technische Universität Berlin. Pictured: the submission of the TU Berlin team.
Why did you join this challenge?
"We participated in this challenge because we were fascinated by its task and the associated data set. Martin, Jan, and Max had already won this year's AAPM Grand Challenge on Deep Learning for Computed Tomography. Our goal was to bring that momentum with us to the task of sharpening blurry images, and we were very happy that Theo joined us and became the key driver of our team!"
Did you learn something new? Does this challenge give you any new ideas for your future work?
"Foremost, we were surprised by how far we could push our learning algorithm. Sharpening the blurry images at the most difficult levels of the challenge seemed pretty much impossible at the beginning, and we were quite thrilled when we later realized that it was possible to crack them!
This result opens up various directions for future research: Technically speaking, it shows that the artificial neural network goes beyond a pure 'regularization' of the inverse problem; it also acts as a 'generator' of features that are inferred from the underlying data distribution. The second aspect makes it possible to reconstruct images from a very limited amount of physical information (as in the last level of the challenge). Our long-term vision is to work out a rigorous mathematical taxonomy of the relationship between these two perspectives."
How did you solve the problem? What was the winning recipe?
"Our main insight is that a solid understanding of the so-called 'forward operator' (meaning the physics behind the out-of-focus blur) is crucial. We first extracted a mathematical model for the out-of-focus blur from the training data and then fed it into the artificial neural network. This appears to be a classical, yet very powerful strategy that applies to many other imaging problems as well."
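The 'forward operator' the team describes is the classical model of out-of-focus blur: convolution of the sharp image with a uniform disk point spread function (PSF). The sketch below is a minimal illustration of that textbook model under assumed parameters, not the team's published code; the function names, the disk radius, and the test image are all made up for demonstration.

```python
import numpy as np

def disk_psf(radius: float, size: int) -> np.ndarray:
    """PSF of an idealised out-of-focus lens: a normalised uniform disk."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = (x**2 + y**2 <= radius**2).astype(float)
    return psf / psf.sum()

def blur(image: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Circular convolution of the image with the PSF via the FFT."""
    pad = np.zeros_like(image)
    k = psf.shape[0]
    pad[:k, :k] = psf
    # Centre the kernel at the origin so the blur does not shift the image.
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0  # a sharp white square, standing in for a letter stroke
blurred = blur(img, disk_psf(radius=4.0, size=11))
```

Because the PSF is normalised, the blur spreads intensity without creating or destroying it; a deblurring network that is handed such a model only has to invert the convolution, rather than discover the physics of the lens from scratch.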
Your comments on the assignment?
"In our view, the challenge was incredibly well designed! In particular, we liked that the task focused on a very specific data distribution: sharpening blurry letters. This also enabled a clear performance evaluation by counting how many letters were successfully deciphered – a far more objective measure than other image-quality metrics such as the signal-to-noise ratio."
"We are very grateful to Prof. Samuli Siltanen and his team for setting up this challenge! In view of the current replication crisis, we believe that challenges like this one are crucial to separate 'hot air' from components that actually work. They also offer a great opportunity to young researchers like us, who have limited access to the in-crowd of the research community. We hope to see more of them in the future!"