Perceptual Losses for Real-Time Style Transfer and Super-Resolution


27 Mar 2016 | Justin Johnson, Alexandre Alahi, Li Fei-Fei
The paper "Perceptual Losses for Real-Time Style Transfer and Super-Resolution" by Justin Johnson, Alexandre Alahi, and Li Fei-Fei explores the use of perceptual loss functions in training feed-forward convolutional neural networks for image transformation tasks. The authors combine the benefits of per-pixel losses, which are efficient at test time, with perceptual losses, which capture high-level semantic and perceptual differences between images. They propose using a pre-trained loss network to define perceptual loss functions, which are then used to train feed-forward transformation networks. The paper demonstrates the effectiveness of this approach on two tasks: style transfer and single-image super-resolution.

For style transfer, the network achieves qualitative results similar to existing optimization-based methods but is significantly faster. For super-resolution, the network trained with a perceptual loss function outperforms per-pixel loss methods at reconstructing fine details and edges. The authors conclude by discussing future directions, including applying perceptual loss functions to other image transformation tasks and exploring different loss networks.
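To make the two perceptual loss terms concrete, here is a minimal NumPy sketch of the feature reconstruction loss and the Gram-matrix style loss described in the paper. This is an illustrative simplification, not the authors' implementation: in the actual method, the feature maps `f_hat` and `f_target` come from activations of a pre-trained VGG-16 loss network, whereas here they are just arbitrary arrays of shape (channels, height, width).

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map, normalized by C*H*W."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def feature_reconstruction_loss(f_hat, f_target):
    """Normalized squared Euclidean distance between two feature maps.

    Penalizes differences in image content as seen by the loss network.
    """
    return np.mean((f_hat - f_target) ** 2)

def style_loss(f_hat, f_target):
    """Squared Frobenius norm of the difference of Gram matrices.

    Penalizes differences in feature correlations (textures, colors),
    largely independent of where they occur spatially.
    """
    g_hat = gram_matrix(f_hat)
    g_target = gram_matrix(f_target)
    return np.sum((g_hat - g_target) ** 2)
```

During training, losses of this form are computed at several layers of the fixed loss network and summed (with weights) to form the total perceptual objective that the feed-forward transformation network is trained to minimize.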