30 Oct 2018 | Golnaz Ghiasi, Tsung-Yi Lin, Quoc V. Le
DropBlock is a structured dropout method designed to improve the regularization of convolutional networks. Unlike traditional dropout, which drops individual units at random, DropBlock drops contiguous regions of a feature map. This suits convolutional layers better, because nearby activations are spatially correlated, so information about a randomly dropped unit can still leak in from its neighbors; removing whole regions forces the network to learn more robust, distributed features and helps prevent overfitting.

Experiments show that DropBlock outperforms dropout on tasks such as ImageNet classification and COCO detection. ResNet-50 with DropBlock reaches 78.13% accuracy on ImageNet, a 1.6% improvement over the baseline, and on COCO detection DropBlock raises the Average Precision of RetinaNet from 36.8% to 38.4%. DropBlock is also effective for semantic segmentation and applies to other architectures, including AmoebaNet, where it likewise improves accuracy.

The method is controlled by two parameters, a block size and a keep probability, and works best when the fraction of dropped units is increased gradually over the course of training, which improves both accuracy and robustness. In comparisons, DropBlock is more effective than related regularization techniques such as SpatialDropout and Cutout. Extensive experiments across models and datasets indicate that DropBlock is a stronger regularizer for convolutional networks than traditional dropout.
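To illustrate how the block size and keep probability interact, here is a minimal, hypothetical PyTorch-style sketch of structured block dropping; it is not the authors' reference implementation. Block centres are sampled at a rate gamma derived from the keep probability, each centre is expanded into a block_size x block_size square of zeros via max pooling, and the surviving activations are rescaled. For simplicity the sketch samples centres over the whole map, whereas the paper restricts sampling so that blocks stay fully inside the feature map.

```python
import torch
import torch.nn.functional as F

def drop_block(x, keep_prob=0.9, block_size=7, training=True):
    """Hypothetical DropBlock-style sketch for a feature map x of shape (N, C, H, W).

    Assumes an odd block_size so that max pooling with padding block_size // 2
    preserves the spatial dimensions.
    """
    if not training or keep_prob >= 1.0:
        return x
    n, c, h, w = x.shape
    # Bernoulli rate for block centres, chosen so the expected fraction of
    # dropped activations is roughly (1 - keep_prob).
    gamma = ((1.0 - keep_prob) / block_size ** 2) * \
            (h * w) / ((h - block_size + 1) * (w - block_size + 1))
    # Sample block centres, then expand each centre into a block_size x block_size
    # square of zeros by max pooling the binary centre map.
    centres = (torch.rand(n, c, h, w, device=x.device) < gamma).float()
    block_mask = 1.0 - F.max_pool2d(centres, kernel_size=block_size,
                                    stride=1, padding=block_size // 2)
    # Rescale the surviving activations so their expected magnitude is unchanged.
    return x * block_mask * block_mask.numel() / block_mask.sum().clamp(min=1.0)
```

For example, `drop_block(torch.randn(8, 256, 28, 28), keep_prob=0.9, block_size=7)` would zero out roughly 10% of each feature map in 7x7 squares. The gradual schedule mentioned above amounts to annealing `keep_prob` linearly from 1.0 down to its target value over training, so few units are dropped early on and progressively more are dropped later.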