The driving question is how to turn the well-developed theory of an inverse problem into an algorithm that can be implemented and ultimately produces a reconstruction of the unknown object. Furthermore, when dealing with real-life measurements, or even just simulations on a computer, discretization to a finite-dimensional setting is an important issue. In particular, how does the problem at hand behave if only finite, possibly noisy, data are available?
In X-ray tomography, or computerized tomography (CT), one has available projection images of a physical body taken from different directions. Typically the X-ray source moves along a curve around the three-dimensional body, and the goal is to recover a two-dimensional slice of the body from the collected data. In CT one uses X-rays to measure how photons are attenuated inside the body within a specific slice. Mathematically, the exact measurement is a collection of line integrals of the non-negative attenuation coefficient function along the paths of the X-rays. These data are modeled by the Radon transform, a linear and continuous operator mapping the attenuation function to its line integrals.
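As an illustration, this measurement model can be simulated numerically. The following is a minimal sketch using scikit-image; the phantom, the angular sampling, the noise level, and the filtered back-projection reconstruction are our own illustrative choices, not prescribed by the text.

```python
# Simulate CT data (a sinogram) via the Radon transform and reconstruct
# with filtered back-projection. Illustrative sketch using scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                    # non-negative attenuation "ground truth"
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=angles)            # line integrals along the X-ray paths
noisy = sinogram + 0.5 * np.random.randn(*sinogram.shape)  # simulated measurement noise

reconstruction = iradon(noisy, theta=angles, filter_name="ramp")  # filtered back-projection
```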
In real-world applications only perturbed data are available, often contaminated with noise from inaccurate measurements or model errors. Reconstructing the attenuation from such measurements is a linear ill-posed inverse problem, so any computational inversion method used for tomographic imaging needs to be regularized. Such regularization methods typically minimize a penalty functional consisting of a fidelity term, describing how well the reconstruction fits the data, and a regularization part that incorporates a priori knowledge of the object into the reconstruction procedure.
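A generic form of such a penalty functional, sketched here in our own notation (with $A$ the Radon transform, $m$ the measured data, $f$ the unknown attenuation, and $\alpha > 0$ a regularization parameter), is

$$\min_{f \geq 0} \; \|Af - m\|_2^2 + \alpha R(f),$$

where the first term is the data fidelity and the functional $R$ encodes a priori knowledge; a common choice is total variation, $R(f) = \|\nabla f\|_1$.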
Tatiana Bubba, Luca Ratti, Rasmus Backholm, Salla Latva-Äijö, Alexander Meaney, Siiri Rautio
Electrical Impedance Tomography (EIT) is a non-invasive imaging modality in which an unknown physical body is probed with harmless and painless electrical currents via electrodes attached to the surface of the body, and the resulting voltages are measured. The goal is to recover the internal conductivity distribution of the body from these current-to-voltage boundary measurements. EIT has various applications in medical imaging, such as respiratory imaging or stroke classification and monitoring.
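Mathematically, the boundary measurements are commonly modeled by the conductivity equation together with its voltage-to-current (Dirichlet-to-Neumann) map; the following standard formulation of the Calderón problem is a sketch in our own notation:

$$\nabla \cdot (\sigma \nabla u) = 0 \ \text{in } \Omega, \qquad \Lambda_\sigma : u|_{\partial \Omega} \mapsto \sigma \, \frac{\partial u}{\partial \nu}\Big|_{\partial \Omega},$$

where $\sigma$ is the conductivity, $u$ the electric potential, $\Omega$ the body, and $\nu$ the outward unit normal. The inverse problem is to recover $\sigma$ from (possibly partial) knowledge of $\Lambda_\sigma$.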
Reconstructions from EIT measurements typically have low spatial resolution, but they are superior in terms of contrast to established imaging modalities such as CT (X-ray tomography) and MRI (magnetic resonance imaging). This makes EIT particularly attractive in applications where large differences in conductivity occur, such as stroke (excess blood or no blood) and respiratory imaging (blood and air).
The reconstruction task is a highly ill-posed nonlinear inverse problem and therefore very sensitive to noise: small changes in the boundary measurements can correspond to large changes in the internal conductivity distribution, and in the case of EIT, noise in the data is amplified exponentially. Therefore, regularization is needed for the noise-robust recovery of conductivities from the boundary measurements. The image reconstruction task is too nonlinear to be covered by the presently available theory of iterative regularization. One way to overcome these obstacles is the D-bar methodology in two dimensions, a direct inversion method and a proven regularization strategy for the full nonlinear problem.
Rashmi Murthy
In digital photography visible light is captured by a camera and converted into digital data. The image data produced by the camera always contain some noise, blur and/or other distortions, caused by various sources in the image acquisition pipeline. The data can be modified to compensate for these errors by applying appropriate mathematical models. Examples of possible image processing tasks include:
Denoising: All photographs contain some amount of noise. The random nature of the photons arriving at the sensor pixels accounts for a large part of the overall noise, which is a combination of multiple sources.
The most obvious source of noise to the average photographer is the increased sensor sensitivity (ISO) used in dim lighting conditions: shooting at high ISO introduces more noise into the image. Denoising an image in a way that respects the original scene becomes increasingly difficult as the noise level rises.
In astrophotography multiple exposures are often combined to average out the noise, but normally we only have a single noisy image to work with. A simple way to remove noise from a single image is to apply a linear smoothing/averaging filter, but this also smooths the details (edges) of the image. A total variation based approach usually preserves the edges better, as in the sketch below. Deep learning methods have also been applied to denoising and work remarkably well.
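The following minimal sketch contrasts a linear Gaussian filter with total variation denoising; it uses scikit-image, and the noise level, filter width, and TV weight are illustrative assumptions.

```python
# Linear smoothing removes noise but blurs edges; total variation (TV)
# denoising removes noise while preserving edges. Sketch using scikit-image.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.restoration import denoise_tv_chambolle

image = img_as_float(data.camera())
noisy = image + 0.1 * np.random.randn(*image.shape)    # additive Gaussian noise

smoothed = gaussian(noisy, sigma=2)                    # linear filter: edges get blurred
tv_denoised = denoise_tv_chambolle(noisy, weight=0.1)  # edge-preserving TV denoising
```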
Deblurring: Blur in a photograph can have many causes; focus blur, motion blur, atmospheric turbulence, and optical blur are the most common ones. Focus blur, for example, is simply caused by misfocusing the lens, so that the scene or object we wanted to capture does not appear sharp in the photograph.
The blurring effect can be modeled with a distortion operator, often called the point spread function (PSF). Deblurring is more challenging than denoising because, in addition to the noise, high-frequency information about the scene has been lost; this combination makes deblurring an ill-posed inverse problem. The more accurately we can estimate the PSF, the better the results we can usually achieve, as in the sketch below.
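The following minimal sketch models blur as convolution with a known PSF and inverts it with Wiener deconvolution; it uses SciPy and scikit-image, and the PSF shape, noise level, and regularization ("balance") parameter are illustrative assumptions.

```python
# Forward model: convolve with a PSF and add noise. Inverse: regularized
# Wiener deconvolution. Sketch using scipy and scikit-image.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import wiener

image = img_as_float(data.camera())
psf = np.ones((5, 5)) / 25                          # simple 5x5 averaging PSF
blurred = convolve2d(image, psf, mode="same")       # blurring as convolution
blurred += 0.01 * np.random.randn(*blurred.shape)   # simulated measurement noise

deblurred = wiener(blurred, psf, balance=0.1)       # regularized inversion
```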
Inpainting: Filling in missing parts of an image, or removing unwanted parts of an image. Check out the following videos to get a better idea of how this can be done; a small sketch of one classical approach is given below.
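As a minimal sketch (using scikit-image; the test image and the mask of missing pixels are illustrative assumptions), missing regions can be filled by biharmonic inpainting:

```python
# Fill a missing/unwanted region by biharmonic inpainting.
# Sketch using scikit-image; the mask is an illustrative choice.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import inpaint_biharmonic

image = img_as_float(data.camera())
mask = np.zeros(image.shape, dtype=bool)
mask[100:140, 200:260] = True          # pixels treated as missing/unwanted

defected = image.copy()
defected[mask] = 0                     # simulate the damaged image
inpainted = inpaint_biharmonic(defected, mask)
```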