3 Mar 2020 | Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, Rosanne Liu
The paper introduces the Plug and Play Language Model (PPLM), a method for controllable text generation that combines a pre-trained language model with one or more simple attribute classifiers. PPLM enables fine-grained control over attributes of the generated text, such as topic and sentiment, without modifying the model architecture or fine-tuning on attribute-specific data. Instead, gradients from the attribute model guide the generation process, steering the text toward the desired attributes while preserving fluency. The authors demonstrate PPLM's effectiveness across a range of experiments, including topic- and sentiment-controlled generation, language detoxification, and controlled story writing, where it performs comparably to or better than existing baselines while offering a flexible and efficient way to control text generation.
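To make the gradient-guidance idea concrete, here is a minimal toy sketch in numpy. It is not the paper's method: PPLM perturbs the transformer's hidden activations (and uses a KL term plus post-norm fusion to preserve fluency), whereas this sketch only ascends the gradient of a bag-of-words attribute score directly on a next-token logit vector. The function name `steer_logits` and all parameters are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def steer_logits(logits, topic_ids, step_size=0.5, n_steps=10):
    """Gradient-ascent steering of next-token logits toward a bag of
    topic words -- a toy analogue of PPLM's bag-of-words attribute
    model. PPLM itself updates the model's hidden activations, not the
    output logits; this only illustrates the gradient signal."""
    z = np.asarray(logits, dtype=float).copy()
    mask = np.zeros_like(z)
    mask[topic_ids] = 1.0
    for _ in range(n_steps):
        p = softmax(z)
        p_topic = (p * mask).sum()      # attribute score p(a | x)
        # Analytic gradient of log p_topic w.r.t. the logits:
        # d log p_topic / dz_j = p_j * 1[j in topic] / p_topic - p_j
        grad = p * mask / p_topic - p
        z += step_size * grad           # ascend log p(a | x)
    return z

# Uniform toy vocabulary of 5 tokens; steer toward token 0.
before = softmax(np.zeros(5))
after = softmax(steer_logits(np.zeros(5), topic_ids=[0]))
```

After steering, the topic token's probability rises well above its initial uniform share while the distribution remains normalized, mirroring (in miniature) how PPLM's attribute gradients bias sampling toward on-topic words.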