Residual Dense Network for Image Super-Resolution

27 Mar 2018 | Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, Yun Fu
This paper proposes a residual dense network (RDN) for image super-resolution (SR) that fully exploits the hierarchical features of the original low-resolution (LR) images. The RDN is built from residual dense blocks (RDBs), which extract abundant local features through densely connected convolutional layers and allow direct connections from the state of the preceding RDB to all layers of the current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion (LFF) is then used to adaptively learn more effective features from the preceding and current local features and to stabilize the training of wider networks. After the dense local features have been obtained, global feature fusion (GFF) is used to adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that RDN performs favorably against state-of-the-art methods.

The RDN is designed to extract and adaptively fuse features from all layers in the LR space efficiently. It consists of a shallow feature extraction net (SFENet), residual dense blocks (RDBs), dense feature fusion (DFF), and finally an up-sampling net (UPNet).

Each RDB contains densely connected layers, local feature fusion (LFF), and local residual learning (LRL), which together realize the contiguous memory (CM) mechanism: the output of one RDB has direct access to each layer of the next RDB, resulting in a contiguous state pass, and each convolutional layer within an RDB has access to all subsequent layers, passing on the information that needs to be preserved. LFF extracts local dense features by adaptively preserving this information; it also enables a very high growth rate by stabilizing the training of wider networks. After multi-level local dense features have been extracted, GFF adaptively preserves the hierarchical features in a global way.
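To make the pipeline concrete, below is a minimal PyTorch-style sketch of an RDB and the overall RDN (SFENet, stacked RDBs, global feature fusion, and a sub-pixel UPNet). The channel width, growth rate, block and layer counts, and the PixelShuffle upsampling are illustrative assumptions, not the exact configuration reported in the paper.

```python
# Minimal PyTorch-style sketch of the RDN described above. Hyperparameters are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class RDB(nn.Module):
    """Residual dense block: densely connected convs + LFF + LRL."""

    def __init__(self, channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        in_channels = channels
        for _ in range(num_layers):
            # Each 3x3 conv sees the block input plus all preceding layer
            # outputs (contiguous memory via dense concatenation).
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_channels, growth_rate, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_channels += growth_rate
        # Local feature fusion: 1x1 conv over all concatenated local features.
        self.lff = nn.Conv2d(in_channels, channels, 1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        # Local residual learning: add the block input back after LFF.
        return x + self.lff(torch.cat(features, dim=1))


class RDN(nn.Module):
    """SFENet -> stacked RDBs -> global feature fusion -> UPNet."""

    def __init__(self, num_blocks=4, channels=64, growth_rate=32,
                 layers_per_block=6, scale=2):
        super().__init__()
        # Shallow feature extraction net: two 3x3 convs.
        self.sfe1 = nn.Conv2d(3, channels, 3, padding=1)
        self.sfe2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.rdbs = nn.ModuleList(
            [RDB(channels, growth_rate, layers_per_block) for _ in range(num_blocks)])
        # Global feature fusion: 1x1 conv over concatenated RDB outputs, then a 3x3 conv.
        self.gff = nn.Sequential(
            nn.Conv2d(num_blocks * channels, channels, 1),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Up-sampling net: sub-pixel convolution followed by reconstruction.
        self.upnet = nn.Sequential(
            nn.Conv2d(channels, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr):
        shallow = self.sfe1(lr)
        x = self.sfe2(shallow)
        local_features = []
        for rdb in self.rdbs:
            x = rdb(x)
            local_features.append(x)
        # Global residual learning: fuse all RDB outputs, add the shallow features.
        x = self.gff(torch.cat(local_features, dim=1)) + shallow
        return self.upnet(x)


if __name__ == "__main__":
    model = RDN(scale=2)
    lr = torch.randn(1, 3, 32, 32)   # toy low-resolution input
    print(model(lr).shape)           # torch.Size([1, 3, 64, 64])
```

Note that all feature extraction happens in the LR space and the image is only upscaled at the very end, which is what keeps the design efficient.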
The RDN is compared with other state-of-the-art methods on benchmark datasets under three degradation models: BI (bicubic downsampling), BD (blurring followed by downsampling), and DN (downsampling followed by additive noise). RDN achieves the best performance on all datasets and at all scaling factors. It is also effective on real-world images for which the original HR images are not available and the degradation model is unknown, demonstrating the effectiveness and robustness of the RDN model.
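For context, the three degradation settings can be sketched roughly as follows; the blur strength, downsampling operator, and noise level used here are assumptions chosen only to illustrate how BI, BD, and DN differ, not the exact benchmark protocol.

```python
# Rough sketch of the BI / BD / DN degradation models. The Gaussian blur sigma,
# bicubic downsampling, and noise level are illustrative assumptions.
import numpy as np
from PIL import Image, ImageFilter


def degrade_bi(hr: Image.Image, scale: int) -> Image.Image:
    """BI: bicubic downsampling of the HR image."""
    return hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)


def degrade_bd(hr: Image.Image, scale: int = 3, sigma: float = 1.6) -> Image.Image:
    """BD: Gaussian blur, then downsampling."""
    return degrade_bi(hr.filter(ImageFilter.GaussianBlur(radius=sigma)), scale)


def degrade_dn(hr: Image.Image, scale: int = 3, noise_level: float = 30.0) -> Image.Image:
    """DN: downsampling, then additive Gaussian noise."""
    lr = np.asarray(degrade_bi(hr, scale), dtype=np.float32)
    lr += np.random.normal(0.0, noise_level, size=lr.shape)
    return Image.fromarray(np.clip(lr, 0, 255).astype(np.uint8))
```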