Model-based learning for accelerated, limited-view 3D photoacoustic tomography
In photoacoustic tomography we aim to obtain high resolution 3D images of optical absorption by sensing laser-generated ultrasound (US). In many practical applications, the spatial sampling of the US signal is not sufficient to obtain high quality reconstructions with fast, filtered-back-projection-like image reconstruction methods: limited-view artefacts arise from geometric restrictions and from spatial undersampling, which is performed to accelerate the data acquisition. Iterative image reconstruction methods that employ an explicit model of the US propagation in combination with spatial sparsity constraints can provide significantly better results in these situations. However, a crucial drawback of these methods is their considerably higher computational complexity and the difficulty of handcrafting sparsity constraints that capture the spatial structure of the target. Recent advances in deep learning for tomographic reconstruction have shown great potential to produce such realistic, high quality images with a considerable speed-up. In this work we present a deep neural network that is specifically designed to provide high resolution 3D images from restricted photoacoustic measurements. The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited-view artefacts. Due to the high complexity of the photoacoustic forward operator, we separate the training from the computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung CT scans and then applied to real measurement data.
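The iterative scheme the abstract describes can be sketched as an unrolled update rule: each stage takes the current image estimate together with the gradient of the data fit and produces a refined estimate. The following is a minimal toy sketch of that idea; the linear operator `A`, the function `learned_update`, and the step size are illustrative assumptions, standing in for the full 3D photoacoustic forward model and the trained network of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward operator standing in for the photoacoustic
# forward model (in the paper this is a full 3D US propagation model).
n_meas, n_img = 32, 16
A = rng.normal(size=(n_meas, n_img))
x_true = rng.normal(size=n_img)
y = A @ x_true  # simulated measurement data

def data_gradient(x):
    """Gradient of the data fit 0.5 * ||A x - y||^2 at x."""
    return A.T @ (A @ x - y)

def learned_update(x, grad, step=1e-2):
    """Hypothetical stand-in for a trained update network G_theta(x, grad).

    Here it reduces to a plain gradient step; in the paper, a CNN
    learned from training data replaces this function and encodes
    the prior on the image structures.
    """
    return x - step * grad

# Unrolled iterative reconstruction: each "layer" refines the estimate
# using the precomputed gradient of the data fit.
x = np.zeros(n_img)
for _ in range(200):
    x = learned_update(x, data_gradient(x))

print("final data residual:", np.linalg.norm(A @ x - y))
```

Separating the gradient computation from the network, as the abstract notes, means `data_gradient` can be evaluated with an external (possibly expensive) forward solver, while only `learned_update` carries trainable parameters.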