VoxelMorph: A Learning Framework for Deformable Medical Image Registration


1 Sep 2019 | Guha Balakrishnan, Amy Zhao, Mert R. Sabuncu, John Guttag, and Adrian V. Dalca
VoxelMorph is a fast, learning-based framework for deformable medical image registration. It learns a parametrized registration function from a collection of volumes, using a convolutional neural network (CNN) to map an input image pair to a deformation field that aligns them. The function is optimized over a training set of volumes, so registering a new pair reduces to a single evaluation of the learned function. Two training strategies are explored: an unsupervised approach that optimizes an intensity-based image-matching objective, and a semi-supervised approach that additionally leverages anatomical segmentations during training. The unsupervised model achieves accuracy comparable to state-of-the-art methods while being significantly faster, and training with auxiliary segmentation data further improves registration accuracy at test time. The effect of training set size on registration is also evaluated. The method promises to speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. The code is freely available at http://voxelmorph.csail.mit.edu.

This paper extends a preliminary version of the work presented at the 2018 Conference on Computer Vision and Pattern Recognition (CVPR). It builds on that work by expanding the analyses and introducing an auxiliary learning model that can use anatomical segmentations during training to improve registration on new test image pairs for which segmentation maps are not available.

The paper is organized as follows. Section 2 introduces medical image registration and Section 3 describes related work. Section 4 presents the methods, and Section 5 presents experimental results on MRI data. Section 6 discusses insights from the results and concludes.

In traditional volume registration, one volume (the moving image) is warped to align with another (the fixed image). Deformable registration enables comparison of structures between scans; such analyses are useful for understanding variability across populations or the evolution of brain anatomy over time for individuals with disease. Deformable registration strategies often involve two steps: an initial affine transformation for global alignment, followed by a much slower deformable transformation with more degrees of freedom. The paper focuses on the latter step, in which a dense, nonlinear correspondence is computed for all voxels.

Most existing deformable registration algorithms iteratively optimize a transformation based on an energy function. With $f$ the fixed image, $m$ the moving image, and $\phi$ the registration field, the optimization problem is written as $\widehat{\phi}=\underset{\phi}{\arg\min}\,\mathcal{L}(f,m,\phi)$, where $\mathcal{L}(f,m,\phi)$ measures image similarity between $f$ and the warped image $m\circ\phi$ and imposes regularization on $\phi$. The paper proposes two loss functions: an unsupervised loss that evaluates the model using only the input volumes and the generated registration field, and an auxiliary loss that also leverages anatomical segmentations at training time. The unsupervised loss consists of two components: $\mathcal{L}_{sim}$, which penalizes differences in appearance between $f$ and $m\circ\phi$, and $\mathcal{L}_{smooth}$, which penalizes local spatial variations in $\phi$.
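The two-term unsupervised objective described above can be sketched in a toy 2-D setting. This is a minimal illustration, not the paper's implementation: the function names (`l_sim`, `l_smooth`, `unsupervised_loss`), the mean-squared-error choice for the similarity term, the finite-difference smoothness penalty, and the weight `lam` are all illustrative assumptions.

```python
# Sketch of a VoxelMorph-style unsupervised loss on a toy 2-D example.
# All names and parameter choices here are illustrative, not from the paper.
import numpy as np

def l_sim(fixed, warped):
    """Similarity term: mean squared intensity difference (one common choice)."""
    return float(np.mean((fixed - warped) ** 2))

def l_smooth(phi):
    """Regularization term: penalize local spatial variation of the
    deformation field. phi has shape (H, W, 2): one displacement vector
    per pixel; gradients are approximated with finite differences."""
    dy = np.diff(phi, axis=0)
    dx = np.diff(phi, axis=1)
    return float(np.mean(dy ** 2) + np.mean(dx ** 2))

def unsupervised_loss(fixed, warped, phi, lam=0.01):
    """Total loss: similarity plus lam-weighted smoothness."""
    return l_sim(fixed, warped) + lam * l_smooth(phi)

# Toy example: constant images, zero (perfectly smooth) deformation field.
f = np.zeros((4, 4))
m_warped = np.ones((4, 4))
phi = np.zeros((4, 4, 2))
print(unsupervised_loss(f, m_warped, phi))  # smoothness term is 0, so loss = 1.0
```

In the actual framework the warped image $m\circ\phi$ is produced by a differentiable spatial-transform layer, so this combined loss can be backpropagated through the CNN that predicts $\phi$.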