18 Jul 2018 | Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang
The paper introduces UNet++, a novel architecture for medical image segmentation that enhances the performance of U-Net and wide U-Net. UNet++ is designed to bridge the semantic gap between the encoder and decoder sub-networks through a series of nested, dense skip pathways. These pathways transform the feature maps so that they are semantically more similar before being fused, making the optimization task easier for the model. The architecture also incorporates deep supervision, allowing the model to operate in both an accurate mode and a fast mode, with the latter enabling model pruning and speed gains.

Experimental results on four medical datasets (nodule segmentation in chest CT scans, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos) show that UNet++ achieves significant improvements in Intersection over Union (IoU) over U-Net and wide U-Net, with average gains of 3.9 and 3.4 points, respectively. The paper also discusses the effectiveness of deep supervision across tasks and the impact of model pruning on performance and inference time.
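The nested skip pathways can be sketched structurally: each node X[i][j] at encoder depth i and skip column j receives dense connections from all earlier nodes at the same depth plus an up-sampled input from the level below. The sketch below is a hypothetical illustration of that connectivity rule (the node naming and `unetpp_inputs` helper are ours, not from an official implementation), assuming five encoder levels as in the paper's figures:

```python
# Structural sketch of UNet++'s nested, dense skip pathways.
# Node (i, j): encoder depth i, skip-pathway column j. This only models
# the wiring of the graph, not the convolutions themselves.

def unetpp_inputs(depth=5):
    """Return a dict mapping each node (i, j) to the list of nodes feeding it."""
    inputs = {}
    for j in range(depth):               # skip-pathway column
        for i in range(depth - j):       # encoder depth
            if j == 0:
                # Backbone encoder: each node takes the down-sampled
                # output of the node one level above it.
                feeds = [(i - 1, 0)] if i > 0 else []
            else:
                # Dense skip connections from all preceding nodes at this
                # depth, plus the up-sampled node from the level below.
                feeds = [(i, k) for k in range(j)] + [(i + 1, j - 1)]
            inputs[(i, j)] = feeds
    return inputs

graph = unetpp_inputs()
# The final decoder node X[0][4] fuses X[0][0..3] with up-sampled X[1][3]:
print(graph[(0, 4)])  # → [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3)]
```

Deep supervision attaches a loss to each top-row node X[0][1..4]; pruning in fast mode amounts to discarding all columns beyond the chosen supervision output, which is why inference can be traded off against accuracy.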