13 Sep 2018 | Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, Eric P. Xing
This paper proposes a new neural generative model for controlled text generation that combines variational autoencoders (VAEs) with holistic attribute discriminators to effectively impose semantic structure on the latent space. The model can be interpreted as enhancing the VAE with an extended wake-sleep procedure, in which generated samples serve as extra training data and generators and discriminators bootstrap each other through efficient collaborative training.

The approach addresses two central challenges of text generation: the discrete, non-differentiable nature of text samples, handled with a differentiable softmax approximation, and the need for disentangled latent representations, encouraged by explicit independence constraints on the attribute codes. Because discriminators provide the learning signal, a separate discriminator can be trained independently for each attribute, and the framework naturally supports semi-supervised learning.

Trained on labeled datasets, the model learns highly disentangled, interpretable representations from only word-level annotations and generates plausible short sentences with specified attributes such as sentiment and tense. Quantitative experiments show improved accuracy in generating sentences with controlled attributes compared to previous generative models. The paper also situates the work within related research on deep generative modeling.
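The differentiable softmax approximation mentioned above can be illustrated with a minimal sketch: instead of feeding the discriminator a hard (argmax or sampled) token, the generator emits a temperature-scaled softmax over the vocabulary, and the discriminator receives the probability-weighted average of the token embeddings. All names, dimensions, and values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_sample(logits, temperature=0.5):
    """Temperature-scaled softmax: a continuous approximation to drawing
    a discrete token, so gradients can flow back to the generator."""
    scaled = logits / temperature
    scaled -= scaled.max()            # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

def soft_embedding(logits, embedding_matrix, temperature=0.5):
    """'Soft' token embedding: the probability-weighted average of all
    token embeddings, fed to the discriminator in place of a hard token."""
    probs = soft_sample(logits, temperature)
    return probs @ embedding_matrix

# Toy vocabulary of 4 tokens with 3-dim embeddings (illustrative values).
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 3))
logits = np.array([2.0, 0.5, -1.0, 0.1])

emb = soft_embedding(logits, E, temperature=0.1)
# As the temperature is annealed toward 0, the distribution sharpens and
# the soft embedding approaches the single row of E for the argmax token.
```

Annealing the temperature during training trades gradient smoothness early on for near-discrete samples later, which is the usual motivation for this kind of relaxation.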
The paper concludes with discussions on the model's interpretability, semi-supervised learning, and potential applications in natural language generation.