17 Nov 2017 | Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, Ming-Hsuan Yang
The paper "Universal Style Transfer via Feature Transforms" by Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang introduces a novel method for universal style transfer that does not require training on predefined styles. The key innovation is the use of feature transforms, specifically whitening and coloring, which are embedded in an image reconstruction network. These transforms directly match the feature covariance of the content image to that of a given style image, similar to the optimization of Gram matrix-based costs in neural style transfer. The method is designed to be efficient and effective, achieving high-quality stylized images with minimal computational overhead. The authors demonstrate the effectiveness of their algorithm through comparisons with several recent methods and show its applicability to universal texture synthesis. The main contributions include the use of feature transforms, the integration of these transforms into a pre-trained encoder-decoder network, and the development of a multi-level stylization pipeline for improved results. The proposed approach is evaluated on various datasets and shown to outperform existing methods in terms of generalization, visual quality, and efficiency.The paper "Universal Style Transfer via Feature Transforms" by Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang introduces a novel method for universal style transfer that does not require training on predefined styles. The key innovation is the use of feature transforms, specifically whitening and coloring, which are embedded in an image reconstruction network. These transforms directly match the feature covariance of the content image to that of a given style image, similar to the optimization of Gram matrix-based costs in neural style transfer. The method is designed to be efficient and effective, achieving high-quality stylized images with minimal computational overhead. The authors demonstrate the effectiveness of their algorithm through comparisons with several recent methods and show its applicability to universal texture synthesis. The main contributions include the use of feature transforms, the integration of these transforms into a pre-trained encoder-decoder network, and the development of a multi-level stylization pipeline for improved results. The proposed approach is evaluated on various datasets and shown to outperform existing methods in terms of generalization, visual quality, and efficiency.