Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

15 Apr 2016 | Chuan Li and Michael Wand
This paper introduces Markovian Generative Adversarial Networks (MGANs), a method for efficient texture synthesis using deep neural networks. Unlike previous approaches that rely on numerical deconvolution, MGANs precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches, enabling direct generation of outputs of arbitrary dimensions. This approach significantly reduces computational costs, achieving run-time performance at least 500 times faster than previous neural texture synthesizers. MGANs are trained adversarially and maintain quality comparable to other recent methods. The paper explores applications in texture synthesis, style transfer, and video stylization, demonstrating the effectiveness of the proposed method in various scenarios.
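The "Markovian patches" mentioned above are local k×k windows of a feature map, which the method's adversarial training scores individually rather than scoring the whole image. A minimal sketch of extracting such patches with NumPy is shown below; the patch size, stride, and feature-map dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def extract_markovian_patches(feature_map, k=3, stride=1):
    """Extract all k x k local patches from a C x H x W feature map.

    Illustrative only: the approach scores local patches of deep
    feature activations; exact sizes here are assumptions.
    """
    C, H, W = feature_map.shape
    patches = []
    for y in range(0, H - k + 1, stride):
        for x in range(0, W - k + 1, stride):
            # Flatten each C x k x k window into one patch vector.
            patches.append(feature_map[:, y:y + k, x:x + k].ravel())
    return np.stack(patches)  # shape: (num_patches, C * k * k)

fm = np.random.rand(8, 16, 16)               # toy feature map
patches = extract_markovian_patches(fm, k=3)
print(patches.shape)                         # (196, 72)
```

Each row of the result is one "Markovian patch"; a patch-level discriminator would classify these rows as real or synthesized, which is what lets the generator produce outputs of arbitrary spatial size.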