Image Super-Resolution Using Very Deep Residual Channel Attention Networks


12 Jul 2018 | Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Yun Fu
The paper addresses the problem of image super-resolution (SR) by proposing a very deep residual channel attention network (RCAN). The authors observe that deeper networks for SR are more challenging to train because low-resolution (LR) inputs contain abundant low-frequency information, which can hinder the representational ability of convolutional neural networks (CNNs). To overcome this, they introduce a residual in residual (RIR) structure, which consists of multiple residual groups with long skip connections and short skip connections. This structure allows low-frequency information to be bypassed, enabling the network to focus on high-frequency information. Additionally, they propose a channel attention (CA) mechanism to adaptively rescale channel-wise features, enhancing the network's ability to learn discriminative features. Extensive experiments on various datasets and degradation models demonstrate that RCAN achieves better accuracy and visual improvements compared to state-of-the-art methods. The contributions of the paper include the introduction of RCAN, the RIR structure, and the CA mechanism, which collectively improve the performance and representational power of deep networks for image SR.
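Below is a minimal sketch, assuming PyTorch, of the building blocks the summary describes: channel attention (CA), a residual channel attention block, a residual group with a short skip connection, and a residual-in-residual trunk with a long skip connection. The channel count, reduction ratio, and block counts here are illustrative assumptions, not values taken from this page, and the sketch omits the network's head and upscaling tail.

```python
# Illustrative sketch of CA / RCAB / RIR-style blocks; hyperparameters are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze global channel statistics, then adaptively rescale each channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze spatial dimensions
            nn.Conv2d(channels, channels // reduction, 1),  # channel down-scaling
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # channel up-scaling
            nn.Sigmoid(),                                   # per-channel weight in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)


class RCAB(nn.Module):
    """Conv-ReLU-Conv followed by channel attention, wrapped in an identity skip."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)


class ResidualGroup(nn.Module):
    """Stack of RCABs plus a conv, with a short skip connection."""
    def __init__(self, channels: int, n_blocks: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            *[RCAB(channels) for _ in range(n_blocks)],
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # short skip connection


class ResidualInResidual(nn.Module):
    """Stack of residual groups with a long skip connection, letting abundant
    low-frequency information bypass the trunk."""
    def __init__(self, channels: int = 64, n_groups: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            *[ResidualGroup(channels) for _ in range(n_groups)],
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # long skip connection
```

The skip connections at both scales carry the easy low-frequency content around the trunk, so the stacked blocks can spend their capacity on high-frequency detail, while the sigmoid gate in the CA block rescales channels according to their pooled statistics.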