Attention U-Net: Learning Where to Look for the Pancreas

20 May 2018 | Ozan Oktay1,5, Jo Schlemper1, Loïc Le Folgoc1, Matthew Lee4, Mattias Heinrich3, Kazunari Misawa2, Kensaku Mori2, Steven McDonagh1, Nils Y Hammerla5, Bernhard Kainz1, Ben Glocker1, and Daniel Rueckert1
The paper introduces Attention U-Net, an architecture that integrates attention gates (AGs) into the standard U-Net for medical image segmentation. AGs automatically learn to focus on target structures of varying shape and size, improving model sensitivity and prediction accuracy without an explicit external localization module. They are trained with standard back-propagation, require no additional supervision, generate soft region proposals, and highlight the features relevant to a given task. Because the attention mechanism operates on the image grid, the attention coefficients are specific to local regions rather than global.

The approach is demonstrated on CT pancreas segmentation, which is challenging due to low tissue contrast and large inter-patient variability in organ shape. The model is evaluated on two large abdominal CT benchmarks for multi-class segmentation: an in-house abdominal CT-150 dataset and TCIA Pancreas CT-82. Results show that AGs consistently improve segmentation accuracy and reduce surface distances across datasets and training-set sizes while adding little computational overhead, and that the model matches state-of-the-art pancreas segmentation performance without cascading multiple CNN models. The approach is generic, efficient for dense label prediction, and applicable to other image analysis tasks; the source code is publicly available.
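To make the gating mechanism concrete, the following is a minimal NumPy sketch of an additive attention gate of the kind described above: skip-connection features x and a coarser gating signal g (assumed already resampled to the same grid) are projected by 1x1 linear maps, combined additively, and turned into per-pixel attention coefficients that scale the skip features. The weight names (Wx, Wg, psi) and shapes are illustrative assumptions, not the paper's actual implementation, which uses convolutional layers and a trilinear resampler.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi, bg, bpsi):
    """Additive attention gate over a 2D feature grid (illustrative sketch).

    x:    skip-connection features, shape (H, W, Fx)
    g:    gating signal upsampled to the same grid, shape (H, W, Fg)
    Wx:   (Fx, Fint) and Wg: (Fg, Fint) -- per-pixel (1x1-conv-like) projections
    psi:  (Fint, 1), bg: (Fint,), bpsi: scalar
    Returns the gated features (H, W, Fx) and the attention map (H, W).
    """
    # Additive attention: combine the two projections, apply ReLU.
    q = np.maximum(x @ Wx + g @ Wg + bg, 0.0)       # shape (H, W, Fint)
    # Per-pixel attention coefficients in (0, 1) -- a soft region proposal.
    alpha = sigmoid(q @ psi + bpsi)[..., 0]          # shape (H, W)
    # Scale each skip feature vector by its local attention coefficient.
    return x * alpha[..., None], alpha
```

In the full architecture, the gated output would replace the raw skip features that the decoder concatenates at each resolution, so irrelevant background responses are suppressed before merging.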
The model is also compared to other segmentation frameworks, demonstrating its effectiveness in improving segmentation accuracy. The paper concludes that the proposed attention gate model is a promising approach for medical image segmentation.