22 Sep 2016 | Michal Drozdzal, Eugene Vorontsov, Gabriel Chartrand, Samuel Kadoury, and Chris Pal
This paper investigates the impact of both long and short skip connections on Fully Convolutional Networks (FCNs) for biomedical image segmentation. The authors extend FCNs with short skip connections, similar to those in ResNets, to build very deep FCNs. They show that both long and short skip connections are beneficial for training deep FCNs, a finding supported by an analysis of gradient flow. The study demonstrates that a very deep FCN can reach near-state-of-the-art results on the electron microscopy (EM) segmentation dataset without any post-processing.
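To make the wiring concrete, the sketch below (a minimal PyTorch illustration, not the authors' implementation; the module names, channel counts, and single-level depth are all our simplifications) shows a tiny FCN with a short, ResNet-style skip inside each block and one long skip joining the contracting and expanding paths by summation:

```python
import torch
import torch.nn as nn

class ShortSkipBlock(nn.Module):
    """Residual block: a short skip adds the input to the conv output."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv2(self.relu(self.conv1(x)))
        return self.relu(out + x)  # short skip (identity shortcut)

class TinySkipFCN(nn.Module):
    """Contracting path -> expanding path, joined by a long skip."""
    def __init__(self, in_ch=1, base=16, n_classes=1):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1),
                                 ShortSkipBlock(base))
        self.down = nn.MaxPool2d(2)
        self.mid = ShortSkipBlock(base)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.dec = ShortSkipBlock(base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        skip = self.enc(x)               # features kept for the long skip
        out = self.mid(self.down(skip))
        out = self.up(out)
        out = self.dec(out + skip)       # long skip (summation here;
                                         # concatenation is another common choice)
        return self.head(out)
```

For even input sizes, `TinySkipFCN()(torch.randn(1, 1, 64, 64))` returns a (1, 1, 64, 64) logit map; the paper's actual networks stack many such blocks over several resolutions.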
The paper presents a residual network for semantic image segmentation, extending ResNets to segmentation tasks by adding an expanding (upsampling) path. The network is built from three block types (bottleneck, basic, and simple) that support spatial downsampling and upsampling. The authors experimented with binary cross-entropy and Dice loss functions, finding that the Dice loss produced cleaner segmentations. They also found that keeping dropout active at test time improved performance.
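The Dice objective lends itself to a direct implementation over predicted foreground probabilities. Below is a common soft Dice loss formulation as a sketch (assuming PyTorch; the smoothing constant `eps` is our choice, not a detail from the paper):

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    probs  : (N, 1, H, W) sigmoid outputs in [0, 1]
    target : (N, 1, H, W) binary ground-truth masks
    """
    probs = probs.reshape(probs.size(0), -1)
    target = target.reshape(target.size(0), -1)
    intersection = (probs * target).sum(dim=1)
    denom = probs.sum(dim=1) + target.sum(dim=1)
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice.mean()  # minimize 1 - Dice to maximize overlap
```

Minimizing 1 - Dice directly maximizes the overlap between prediction and ground truth, which tends to be less sensitive to foreground/background imbalance than per-pixel cross-entropy.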
The experiments were conducted on EM data, with 30 training images and 30 test images. The model was trained with data augmentation techniques such as random flipping, shearing, and rotations. The results showed that the model with both long and short skip connections performed best, converging faster and achieving better accuracy than models with only one type of skip connection. The authors also found that short skip connections helped stabilize parameter updates, especially in deep networks.
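As an illustration of such an augmentation pipeline (assuming torchvision; the angle and shear ranges are illustrative, not the paper's settings):

```python
import torch
import torchvision.transforms as T

# Geometric augmentations: random flips, rotations, and shears.
# For segmentation, the image and its mask must receive the SAME
# random parameters, e.g. by transforming them stacked channel-wise.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomAffine(degrees=180, shear=10),
])

image_and_mask = torch.rand(2, 512, 512)  # channel 0: image, channel 1: mask
augmented = augment(image_and_mask)       # both transformed identically
```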
The study concludes that short skip connections are essential for training deep FCNs, as they help alleviate the vanishing gradient problem and improve convergence speed. The results show that a very deep FCN can achieve near-state-of-the-art performance on the EM dataset without further post-processing. The authors also note that the use of batch normalization increases the maximal updatable depth of the network.
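As a sketch of where batch normalization typically sits inside such blocks (a pre-activation ordering in PyTorch; the paper's exact block layout may differ):

```python
import torch.nn as nn

class BNResBlock(nn.Module):
    """Residual block with batch normalization before each activation
    (BN -> ReLU -> conv, a common pre-activation ordering)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # short skip; BN keeps activations well-scaled
```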