Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems
26 Aug 2015 | Tsung-Hsien Wen, Milica Gašić, Nikola Mrkšić, Pei-Hao Su, David Vandyke and Steve Young
This paper presents a statistical language generator for spoken dialogue systems (SDS) based on a semantically controlled Long Short-term Memory (LSTM) structure. The proposed method, called SC-LSTM, can learn from unaligned data by jointly optimizing sentence planning and surface realization with a simple cross-entropy training criterion, and it produces linguistic variation by sampling from output candidates. The authors evaluate the method in two test domains and report improved informativeness and naturalness over previous methods, as measured by both objective metrics and human judges; the SC-LSTM system was preferred over the competing systems in both domains. The paper also describes the SC-LSTM architecture, its deep variant, and a backward reranking step that improves fluency, and it provides training and decoding details, experimental results, and a discussion of future work.
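The central mechanism is a "reading gate" that multiplies a dialogue-act (DA) vector at each generation step, so that slots decay as they are realized in the output. The sketch below illustrates only that gating idea with random toy weights; the dimensions, weight names, and the omission of the full LSTM cell and the α scaling factor are simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen for illustration only)
vocab, hidden, da_dim = 10, 8, 5

# Reading-gate parameters (random; a real system would learn these)
W_wr = rng.normal(size=(da_dim, vocab))
W_hr = rng.normal(size=(da_dim, hidden))

def read_gate_step(w_t, h_prev, d_prev):
    """One reading-gate step in the spirit of SC-LSTM:
    r_t = sigmoid(W_wr w_t + W_hr h_prev), then d_t = r_t * d_prev.
    Because every gate value lies in (0, 1), the DA vector can only
    shrink: slots are gradually 'consumed' as words are generated."""
    r_t = sigmoid(W_wr @ w_t + W_hr @ h_prev)
    return r_t * d_prev

# One-hot DA vector: which semantic slots still need to be mentioned
d = np.ones(da_dim)
history = [d.copy()]
for t in range(6):
    w_t = np.eye(vocab)[rng.integers(vocab)]   # stand-in one-hot word input
    h_prev = rng.normal(size=hidden)           # stand-in recurrent state
    d = read_gate_step(w_t, h_prev, d)
    history.append(d.copy())

# The DA vector decays monotonically, element by element
print(all((history[i + 1] <= history[i]).all()
          for i in range(len(history) - 1)))
```

Running this prints `True`: each slot value only ever decreases, which is the property the paper exploits so the generator stops re-mentioning slots it has already realized.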