Deep Back-Projection Networks For Super-Resolution

7 Mar 2018 | Muhammad Haris, Greg Shakhnarovich, and Norimichi Ukita
This article introduces Deep Back-Projection Networks (DBPN) for single-image super-resolution. Unlike purely feed-forward deep networks, DBPN interleaves iterative up- and down-sampling stages that provide an explicit error-feedback mechanism, exploiting the mutual dependency between low-resolution (LR) and high-resolution (HR) images for more accurate reconstruction. The mutually connected up- and down-sampling stages represent different types of image degradation and HR components. Extending this idea with feature concatenation across stages yields Dense DBPN (D-DBPN), which further improves super-resolution quality; the networks are trained end-to-end, with dense connections encouraging feature reuse. A minimal sketch of one up-/down-projection pair is given below.

Experimental results show that D-DBPN outperforms existing state-of-the-art methods, particularly at large scaling factors such as 8×, achieving higher PSNR and SSIM values while using fewer parameters than other methods. The results demonstrate that DBPN preserves HR components and generates high-quality images from LR inputs. The article also reviews related work, including feedback networks and adversarial training, and highlights the proposed method's advantages in both performance and efficiency.
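The back-projection idea is easiest to see in a single up-/down-projection pair. The sketch below is written in PyTorch as an illustration (the framework, channel count, and PReLU activation are assumptions, not taken from this summary): the up-projection unit upsamples an LR feature map, projects the result back down, and uses the LR-space reconstruction error to correct the HR estimate; the down-projection unit mirrors this in HR space. The 8×8 kernels with stride 4 and padding 2 correspond to a 4× scaling factor and are illustrative defaults.

    # Minimal sketch of DBPN-style projection units (assumed PyTorch implementation,
    # not the authors' released code).
    import torch
    import torch.nn as nn

    class UpProjection(nn.Module):
        """Up-projection unit: upsample, re-downsample, feed the LR error back."""
        def __init__(self, channels: int, kernel: int = 8, stride: int = 4, pad: int = 2):
            super().__init__()
            self.up1 = nn.ConvTranspose2d(channels, channels, kernel, stride, pad)
            self.down = nn.Conv2d(channels, channels, kernel, stride, pad)
            self.up2 = nn.ConvTranspose2d(channels, channels, kernel, stride, pad)
            self.act = nn.PReLU(channels)

        def forward(self, lr_feat: torch.Tensor) -> torch.Tensor:
            h0 = self.act(self.up1(lr_feat))   # initial HR estimate
            l0 = self.act(self.down(h0))       # project back to LR space
            err = l0 - lr_feat                 # LR reconstruction error
            h1 = self.act(self.up2(err))       # upsample the error
            return h0 + h1                     # corrected HR feature map

    class DownProjection(nn.Module):
        """Down-projection unit: mirror of the above, correcting in HR space."""
        def __init__(self, channels: int, kernel: int = 8, stride: int = 4, pad: int = 2):
            super().__init__()
            self.down1 = nn.Conv2d(channels, channels, kernel, stride, pad)
            self.up = nn.ConvTranspose2d(channels, channels, kernel, stride, pad)
            self.down2 = nn.Conv2d(channels, channels, kernel, stride, pad)
            self.act = nn.PReLU(channels)

        def forward(self, hr_feat: torch.Tensor) -> torch.Tensor:
            l0 = self.act(self.down1(hr_feat)) # initial LR estimate
            h0 = self.act(self.up(l0))         # project back to HR space
            err = h0 - hr_feat                 # HR reconstruction error
            l1 = self.act(self.down2(err))     # downsample the error
            return l0 + l1                     # corrected LR feature map

    if __name__ == "__main__":
        x = torch.randn(1, 64, 32, 32)         # toy LR feature map
        up, down = UpProjection(64), DownProjection(64)
        hr = up(x)                             # 1 x 64 x 128 x 128 for 4x
        lr = down(hr)                          # back to 1 x 64 x 32 x 32
        print(hr.shape, lr.shape)

Stacking several such pairs and concatenating the HR outputs of all up-projection units before a final reconstruction layer corresponds to the dense (D-DBPN) variant described above.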