WIZARD OF WIKIPEDIA: KNOWLEDGE-POWERED CONVERSATIONAL AGENTS

21 Feb 2019 | Emily Dinan*, Stephen Roller*, Kurt Shuster*, Angela Fan, Michael Auli, Jason Weston
The paper "Wizard of Wikipedia: Knowledge-Powered Conversational Agents" by Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston from Facebook AI Research addresses the challenge of building intelligent agents that can use knowledge in open-domain dialogue. Traditional sequence-to-sequence models tend to memorize surface patterns rather than recall knowledge, which makes it difficult for them to incorporate knowledge effectively. To overcome this, the authors collect and release a large dataset of conversations grounded in knowledge retrieved from Wikipedia, and design architectures capable of retrieving, reading, and conditioning on this knowledge to generate natural responses.

The dataset, named "Wizard of Wikipedia," consists of 22,311 dialogues with 201,999 turns, covering a diverse range of topics. The authors develop two classes of models: retrieval models, which select a response from a set of candidates, and generative models, which produce responses word by word. Both combine elements of Memory Network architectures for knowledge retrieval with Transformer architectures for text representation and sequence modeling.

The paper evaluates these models with both automatic metrics and human evaluations, demonstrating their ability to conduct knowledgeable and engaging conversations on open-domain topics. The authors also release the task as a benchmark in the ParlAI framework to encourage further progress in this direction. The work highlights the importance of integrating knowledge into conversational agents and provides a framework for future research.
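The knowledge-attention step described above can be sketched in miniature. The following is an illustrative toy, not the paper's implementation: a bag-of-words encoder stands in for the Transformer encoder, and a dot-product score over candidate knowledge sentences stands in for the Memory-Network-style attention that selects which Wikipedia sentence to condition the response on. All function names here are hypothetical.

```python
from collections import Counter


def embed(text, vocab):
    """Toy bag-of-words encoder, standing in for the paper's Transformer encoder."""
    counts = Counter(text.lower().split())
    return [counts.get(w, 0) for w in vocab]


def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))


def select_knowledge(context, candidates):
    """Memory-Network-style knowledge selection (simplified): score each
    candidate knowledge sentence against the dialogue context by dot
    product and return the best-scoring sentence plus all scores."""
    vocab = sorted({w for t in [context] + candidates for w in t.lower().split()})
    ctx_vec = embed(context, vocab)
    scores = [dot(ctx_vec, embed(c, vocab)) for c in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores


context = "tell me about the history of jazz music"
knowledge = [
    "Jazz is a music genre that originated in New Orleans.",
    "The Eiffel Tower was completed in 1889.",
]
chosen, scores = select_knowledge(context, knowledge)
# The on-topic sentence scores highest and would be fed to the decoder.
```

In the full models, the selected (or attention-weighted) knowledge sentence is concatenated with the dialogue context and passed to either the retrieval ranker or the generative decoder.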