21 Jun 2016 | Özgün Çiçek (1,2), Ahmed Abdulkadir (1,4), Soeren S. Lienkamp (2,3), Thomas Brox (1,2), and Olaf Ronneberger (1,2,5)
This paper introduces a 3D U-Net architecture for volumetric segmentation that learns from sparsely annotated 3D images. The method applies in two scenarios. In the semi-automated setup, the user annotates a few slices of the volume to be segmented, and the network produces a dense 3D segmentation from these sparse annotations. In the fully-automated setup, the network is trained on a sparsely annotated training set and then segments previously unseen 3D volumes. The 3D U-Net extends the 2D U-Net by replacing all 2D operations with their 3D counterparts: 3D convolutions, 3D max pooling, and 3D up-convolutional layers. The network uses batch normalization and performs on-the-fly elastic deformations for efficient data augmentation, and it is trained end-to-end from scratch, without pre-trained models. The method is evaluated on the complex structure of the Xenopus kidney and achieves good results in both scenarios. The implementation is provided as open source.
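To make the 2D-to-3D operation swap concrete, the following is a minimal sketch of a two-level 3D U-Net in PyTorch. This is not the authors' implementation (which was released in Caffe); the framework choice, channel sizes, and depth here are illustrative assumptions. It shows the operations the abstract names: 3D convolutions, 3D max pooling, 3D up-convolution, batch normalization, and the U-Net skip connection.

```python
# A minimal sketch (assumed PyTorch; not the paper's Caffe code) of the
# 2D -> 3D operation replacement: Conv3d, MaxPool3d, ConvTranspose3d,
# BatchNorm3d. Channel widths and network depth are illustrative only.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions, each followed by batch normalization and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """Two-level 3D U-Net: one encoder stage, a bottleneck, one decoder
    stage with an up-convolution, and a skip connection across levels."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool3d(kernel_size=2)  # 2x2x2 max pooling
        self.bottleneck = conv_block(32, 64)
        # Up-convolution doubles the spatial resolution again.
        self.up = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)  # 64 = 32 (skip) + 32 (upsampled)
        self.head = nn.Conv3d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        # Skip connection: concatenate encoder features with upsampled ones.
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)  # per-voxel class scores

# Usage: input shape is (batch, channel, depth, height, width).
x = torch.randn(1, 1, 32, 64, 64)
logits = TinyUNet3D()(x)  # -> (1, 2, 32, 64, 64)
```

The full 3D U-Net stacks several such encoder/decoder levels; the essential point is simply that every 2D layer of the original U-Net has a direct 3D counterpart operating on volumes instead of images.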