Brain Tumor Segmentation with Deep Neural Networks

May 23, 2016 | Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, Hugo Larochelle
This paper presents a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. These tumors can appear anywhere in the brain and take almost any shape, size, and contrast, which motivates a flexible, high-capacity DNN. The paper explores the model choices required for competitive performance, including several CNN architectures. A novel CNN architecture is introduced that simultaneously exploits local and global contextual features, and a final layer implemented as a convolutional version of a fully connected layer allows a 40-fold speed-up. A two-phase training procedure is described to tackle label imbalance, and a cascade architecture is explored in which the output of a basic CNN is treated as additional information for a subsequent CNN.

The paper reviews related work in brain tumor segmentation, covering both generative and discriminative models, and highlights the challenges of the task: tumors are diffuse and poorly contrasted, and their delineation requires multi-modal MRI data. The proposed CNN-based approach learns feature hierarchies adapted to brain tumor segmentation, combining information across MRI modalities. The network processes 2D axial images slice by slice, taking the modalities (T1, T2, T1C, FLAIR) as input channels, and is built from convolutional layers, non-linear activation functions, and max pooling. Its two-pathway architecture learns both local details and larger context.
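To make the two-pathway design concrete, here is a minimal sketch of such a patch classifier in PyTorch. This is not the authors' implementation (the paper used Pylearn2/Theano with maxout units): the class name, ReLU activations, and feature-map counts are illustrative assumptions, while the overall layout (a local pathway with small kernels, a global pathway with one large kernel, and a final classification layer written as a convolution) follows the paper's description.

```python
import torch
import torch.nn as nn

class TwoPathCNNSketch(nn.Module):
    """Illustrative two-pathway patch classifier (not the authors' exact model).

    Input: 33x33 patches with 4 MRI modalities (T1, T1C, T2, FLAIR) as channels.
    Output: scores for 5 tissue classes at the central pixel of the patch.
    """
    def __init__(self, in_channels=4, n_classes=5):
        super().__init__()
        # Local pathway: small kernels capture fine detail around the pixel.
        self.local = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=7),    # 33 -> 27
            nn.ReLU(inplace=True),                        # paper uses maxout; ReLU for simplicity
            nn.MaxPool2d(kernel_size=4, stride=1),        # 27 -> 24
            nn.Conv2d(64, 64, kernel_size=3),             # 24 -> 22
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=1),        # 22 -> 21
        )
        # Global pathway: one large kernel captures broader context.
        self.global_path = nn.Sequential(
            nn.Conv2d(in_channels, 160, kernel_size=13),  # 33 -> 21
            nn.ReLU(inplace=True),
        )
        # Final "fully connected" layer written as a convolution, so the same
        # network can also label a whole slice densely at test time.
        self.classifier = nn.Conv2d(64 + 160, n_classes, kernel_size=21)

    def forward(self, x):
        features = torch.cat([self.local(x), self.global_path(x)], dim=1)
        return self.classifier(features)

if __name__ == "__main__":
    model = TwoPathCNNSketch()
    patch = torch.randn(8, 4, 33, 33)      # a mini-batch of training patches
    print(model(patch).shape)               # (8, 5, 1, 1): one prediction per patch
    slice_ = torch.randn(1, 4, 240, 240)    # a full axial slice at test time
    print(model(slice_).shape)              # (1, 5, 208, 208): dense label scores
```

Because the last layer is a convolution rather than a flattened fully connected layer, the trained patch classifier can be applied to an entire slice in a single forward pass instead of once per pixel, which is the source of the roughly 40-fold test-time speed-up reported in the paper.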
Results on the 2013 BRATS test dataset show that the architecture improves over the currently published state of the art while being over 30 times faster. A two-phase training procedure handles the imbalanced label distribution, and a cascaded architecture is proposed as an efficient alternative to structured output methods. Implementation details include the use of the Pylearn2 library, the pre-processing steps, and hyper-parameter tuning. On this benchmark the TwoPathCNN model reaches rank 4 on the BRATS leaderboard and segments a brain in about 25 seconds, one order of magnitude faster than most state-of-the-art methods. The paper also analyses how the different training phases and architectural variants affect segmentation performance.
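The two-phase training idea can be sketched briefly as well. In the paper, the network is first trained on patches sampled with roughly equal class frequencies, so that the rare tumor classes are seen often enough; in a second phase only the output layer is re-trained on patches drawn with the true, highly unbalanced class frequencies, so the predicted probabilities better match the real label distribution. The sampler and training loop below are an illustrative PyTorch sketch, not the authors' Pylearn2 code; `patches` and `labels` are assumed tensors of pre-extracted training patches, `model` is the two-pathway sketch above, and the optimizer settings are placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

def make_loader(patches, labels, balanced, batch_size=128):
    """Phase 1 uses class-balanced sampling; phase 2 uses the natural distribution."""
    dataset = TensorDataset(patches, labels)
    if balanced:
        class_counts = torch.bincount(labels)
        weights = 1.0 / class_counts[labels].float()    # rare classes sampled more often
        sampler = WeightedRandomSampler(weights, num_samples=len(labels))
        return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)

def train(model, loader, params, epochs=1):
    """Train only the given parameters with a plain cross-entropy objective."""
    optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9)  # placeholder settings
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            logits = model(x).flatten(1)     # (N, 5, 1, 1) -> (N, 5)
            loss = criterion(logits, y)
            loss.backward()
            optimizer.step()

# Phase 1: train all weights on a balanced label distribution.
#   train(model, make_loader(patches, labels, balanced=True), model.parameters())
# Phase 2: re-train only the output layer on the natural (unbalanced) distribution.
#   train(model, make_loader(patches, labels, balanced=False), model.classifier.parameters())
```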