Dunhuang murals image restoration method based on generative adversarial network

2024 | Hui Ren, Ke Sun, Fanhua Zhao, Xian Zhu
This paper proposes a generative adversarial network (GAN) method for restoring damaged Dunhuang murals. The method combines a parallel dual convolutional feature extraction depth generator with a ternary heterogeneous joint discriminator. The generator extracts image features in parallel through vanilla and dilated convolutions, capturing multi-scale features and reducing information loss during restoration.
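As a rough illustration of the parallel dual-convolution idea, the sketch below runs a vanilla 3x3 convolution and a dilated 3x3 convolution side by side and fuses the two branches. This is a minimal sketch assuming a PyTorch implementation; the channel sizes, dilation rate, and fusion layer are assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the authors' code): parallel vanilla + dilated convolution
# branches whose outputs are concatenated and fused, giving multi-scale features.
import torch
import torch.nn as nn

class ParallelConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        # Branch 1: vanilla convolution captures fine local detail.
        self.vanilla = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Branch 2: dilated convolution enlarges the receptive field to
        # capture larger-scale structure without extra parameters.
        self.dilated = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.act(self.vanilla(x))
        b = self.act(self.dilated(x))
        return self.act(self.fuse(torch.cat([a, b], dim=1)))

# Example: a 256x256 RGB mural patch with a binary damage mask as a 4th channel
# (the mask-as-channel input is an assumption common in inpainting networks).
block = ParallelConvBlock(in_ch=4, out_ch=64)
features = block(torch.randn(1, 4, 256, 256))  # -> (1, 64, 256, 256)
```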
The discriminator combines a pixel-level discriminator, which identifies defects at the pixel level, with global and local discriminators that assess the generated image at different scales. The method is validated on a newly created Dunhuang murals dataset and shows significant improvements in PSNR and SSIM over existing methods. The restored images align more closely with human perception, addressing the incomplete restoration and local detail distortion seen in traditional digital restoration techniques.
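The sketch below illustrates one plausible shape for the ternary joint discriminator: a fully convolutional pixel-level head that emits a per-pixel real/fake map, plus global and local heads that score the whole image and the crop around the damaged region. The layer configuration, crop size, and PatchGAN-style heads are assumptions for illustration, not the paper's exact networks.

```python
# Minimal sketch (assumed structure): three discriminators judging the restored
# image at pixel, local, and global levels; their adversarial losses would be
# combined during training (weighting is an assumption).
import torch
import torch.nn as nn

def conv_bn_lrelu(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class PixelDiscriminator(nn.Module):
    """1x1-conv head: one real/fake score per pixel."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )
    def forward(self, x):
        return self.net(x)                      # (N, 1, H, W)

class GlobalDiscriminator(nn.Module):
    """Downsampling CNN that scores the full restored image."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn_lrelu(in_ch, 64), conv_bn_lrelu(64, 128),
            conv_bn_lrelu(128, 256), conv_bn_lrelu(256, 512),
        )
        self.score = nn.Conv2d(512, 1, kernel_size=4)
    def forward(self, x):
        return self.score(self.features(x))

class LocalDiscriminator(GlobalDiscriminator):
    """Same backbone, applied only to the crop around the restored region."""
    pass

pixel_d, global_d, local_d = PixelDiscriminator(), GlobalDiscriminator(), LocalDiscriminator()
fake = torch.randn(1, 3, 256, 256)
local_crop = fake[:, :, 64:192, 64:192]         # crop location/size is illustrative
scores = (pixel_d(fake), global_d(fake), local_d(local_crop))
```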
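For reference, the reported metrics can be reproduced as follows. This is a minimal sketch using scikit-image (the `channel_axis` argument assumes scikit-image 0.19+); the function and file handling are illustrative, not the paper's evaluation code.

```python
# Minimal sketch: PSNR and SSIM between a restored mural image and its
# ground-truth reference, both uint8 RGB arrays of identical shape.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(restored: np.ndarray, reference: np.ndarray):
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    ssim = structural_similarity(reference, restored, data_range=255, channel_axis=-1)
    return psnr, ssim
```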