The paper introduces a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between low- and high-resolution images using a deep convolutional neural network (CNN). The authors show that traditional sparse-coding-based SR methods can be viewed as a special case of a deep CNN whose components are optimized separately; their method instead jointly optimizes all layers, achieving superior restoration quality at faster speed. The proposed Super-Resolution Convolutional Neural Network (SRCNN) has a lightweight structure and is simple yet effective. Experiments show that SRCNN outperforms existing state-of-the-art methods in both quantitative metrics (PSNR, SSIM) and visual quality. The method is also extended to handle color images, achieving better overall reconstruction quality.
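The end-to-end mapping described above can be sketched as a forward pass through three convolutional layers (patch extraction, non-linear mapping, reconstruction), with ReLU after the first two. The kernel sizes and filter counts below are toy values chosen for illustration, not the paper's reported configuration:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution (deep-learning convention, i.e. cross-correlation).
    x: (H, W, C_in), w: (k, k, C_in, C_out), b: (C_out,)."""
    k = w.shape[0]
    h_out, w_out = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((h_out, w_out, w.shape[3]))
    for i in range(h_out):
        for j in range(w_out):
            patch = x[i:i + k, j:j + k, :]
            # Contract the patch against every filter at once.
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def srcnn_forward(y, params):
    """Three-layer mapping from an (upscaled) low-resolution image y to a
    high-resolution estimate: extraction -> mapping -> reconstruction."""
    (w1, b1), (w2, b2), (w3, b3) = params
    h = np.maximum(conv2d(y, w1, b1), 0)   # patch extraction + ReLU
    h = np.maximum(conv2d(h, w2, b2), 0)   # non-linear mapping + ReLU
    return conv2d(h, w3, b3)               # reconstruction (linear)

# Toy weights: 5x5, 1x1, 3x3 kernels with 4 and 3 intermediate channels.
rng = np.random.default_rng(0)
params = [
    (0.1 * rng.standard_normal((5, 5, 1, 4)), np.zeros(4)),
    (0.1 * rng.standard_normal((1, 1, 4, 3)), np.zeros(3)),
    (0.1 * rng.standard_normal((3, 3, 3, 1)), np.zeros(1)),
]
out = srcnn_forward(rng.standard_normal((20, 20, 1)), params)
```

Because all three layers use valid convolutions, the output is slightly smaller than the input (here 14x14 from a 20x20 input); in practice the loss is evaluated only on the region the network actually produces.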
The paper discusses the trade-offs between performance and speed, and explores different network structures and parameter settings. The authors conclude that their approach is robust and can be applied to other low-level vision problems.
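The quantitative comparisons mentioned above rely on PSNR, which is a direct function of the mean squared error between the reconstruction and the ground truth. A minimal implementation, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(reference, reconstruction, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

Since minimizing MSE directly maximizes PSNR, training the network with an MSE loss aligns the optimization objective with the evaluation metric.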