This paper introduces Markovian Generative Adversarial Networks (MGANs), a method for efficient texture synthesis based on precomputed feedforward convolutional networks. Traditional deep-learning approaches to texture synthesis are computationally expensive, often requiring minutes to produce low-resolution images. MGANs address this by precomputing a convolutional network that captures the feature statistics of Markovian patches and can directly generate outputs of arbitrary dimensions. The network can convert brown noise into realistic textures or photographs into artistic paintings, at a quality comparable to recent neural methods. Because the precomputed network eliminates per-image optimization at generation time, run-time performance improves dramatically: 0.25-megapixel images are synthesized at 25 Hz, up to 500 times faster than previous neural methods. The approach is demonstrated on texture synthesis, style transfer, and video stylization.
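To illustrate why one precomputed network can emit outputs of arbitrary dimensions, the sketch below shows a fully convolutional decoder in PyTorch. This is a minimal illustration, not the authors' architecture: the paper's implementation used Torch, and the class name, layer sizes, and channel counts here are assumptions. The key property is that no layer has a fixed spatial size, so the same trained weights apply to any input resolution.

```python
# Minimal sketch (hypothetical layer sizes): a fully convolutional decoder
# contains no fixed-size layers, so a single precomputed network can emit
# outputs of arbitrary spatial dimensions in one forward pass.
import torch
import torch.nn as nn

class FeedForwardGenerator(nn.Module):
    def __init__(self, in_channels=512):
        super().__init__()
        # Strided transposed convolutions upsample encoded feature maps
        # back to image resolution; all channel counts are illustrative.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 256, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # image values in [-1, 1]
        )

    def forward(self, features):
        return self.decode(features)

# Because every layer is convolutional, any spatial size works:
g = FeedForwardGenerator()
out = g(torch.randn(1, 512, 16, 16))   # -> (1, 3, 256, 256)
out = g(torch.randn(1, 512, 32, 48))   # -> (1, 3, 512, 768)
```

Each transposed convolution doubles the spatial resolution, so the output size scales directly with the input feature map, which is what makes one-pass synthesis at arbitrary dimensions possible.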
The paper discusses the limitations of previous methods, such as deconvolutional approaches that synthesize by iterative back-propagation and are therefore slow. MGANs instead use adversarial training to preserve image quality while improving speed. The model couples a discriminative network (D) with a generative network (G): D is trained to distinguish real from synthesized patches, while G decodes VGG-19 feature maps into images. The model is evaluated on various tasks, including unguided and guided texture synthesis, showing high-quality results with significant speed improvements.
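A minimal sketch of the patch-level adversarial step follows, assuming a PyTorch setup. The names `PatchDiscriminator`, `d_step`, and the `vgg` feature extractor are hypothetical; the paper samples explicit neural patches from VGG feature maps, whereas this sketch approximates that with a fully convolutional D whose output grid assigns one score per receptive-field patch.

```python
# Hedged sketch of Markovian (patch-level) adversarial training: D scores
# local patches of VGG feature maps rather than whole images. With a fully
# convolutional D, each spatial location of its output is the logit for one
# receptive-field patch, so patch extraction is implicit.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Classifies local neural patches as real or synthesized (illustrative)."""
    def __init__(self, in_channels=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 1, 3, padding=1),  # one logit per patch location
        )

    def forward(self, feats):
        return self.net(feats)

def d_step(D, vgg, real_img, fake_img, opt_d):
    """One discriminator update on VGG feature patches (illustrative).
    `vgg` is assumed to return feature maps, e.g. from an intermediate
    relu layer of a pretrained VGG-19."""
    real_logits = D(vgg(real_img))
    fake_logits = D(vgg(fake_img.detach()))  # do not backprop into G here
    loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    opt_d.zero_grad()
    loss.backward()
    opt_d.step()
    return loss.item()
```

The generator step mirrors this with the labels flipped on the synthesized patches, so G is pushed to produce feature patches that D cannot tell apart from the example texture's.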
Experiments demonstrate that MGANs are substantially faster than previous methods while producing results of quality comparable to the state of the art. The precomputed feedforward network makes synthesis of textures and images rapid and efficient. The paper also discusses the approach's limitations, such as reduced performance on non-texture data and on tasks that require semantic understanding. Overall, MGANs offer a fast and effective solution for texture synthesis and style transfer, with potential applications in artistic image and video stylization.