10 Mar 2016 | Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, Victor Lempitsky
The paper introduces a novel approach to texture synthesis and image stylization using feed-forward convolutional networks. Unlike previous methods that rely on iterative optimization, this approach trains compact feed-forward networks to generate multiple samples of a texture at arbitrary size and to transfer artistic style from one image to another. The resulting networks are significantly faster and more memory-efficient, matching the quality of existing methods while running several orders of magnitude faster. Extensive experiments show that the approach handles a wide range of textures and styles and achieves real-time performance suitable for video applications. The paper also describes the architecture and training of the proposed texture networks, highlighting the use of perceptual loss functions computed by a fixed descriptor network, together with multi-scale processing, to achieve high-quality results.
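The texture loss driving the training is the Gram-matrix statistic of Gatys et al., computed on the feature maps of a fixed pretrained network such as VGG-19, which the feed-forward generator learns to match. As an illustration, here is a minimal PyTorch sketch of such a loss; the function names are illustrative and the feature maps are assumed to be precomputed, since the paper's original implementation used Torch/Lua and this is not its actual code.

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a feature map of shape (B, C, H, W)."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    # Normalize by the number of spatial positions so the statistic is
    # comparable across layers and image sizes (convention varies by paper).
    return flat @ flat.transpose(1, 2) / (h * w)

def texture_loss(generated_feats, texture_feats):
    """Sum of squared Gram-matrix differences over a set of descriptor layers.

    Both arguments are lists of feature maps, assumed to come from the same
    layers of a frozen descriptor network (e.g. VGG-19) applied to the
    generated image and the target texture, respectively.
    """
    loss = torch.zeros(())
    for gf, tf in zip(generated_feats, texture_feats):
        loss = loss + F.mse_loss(gram_matrix(gf), gram_matrix(tf))
    return loss
```

Because the Gram matrix discards spatial arrangement and keeps only feature co-occurrence statistics, minimizing this loss encourages the generator to reproduce the texture's statistics rather than copy the exemplar pixel-for-pixel, which is what lets a single trained network emit many distinct samples of the same texture.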