MEMORY NETWORKS

29 Nov 2015 | Jason Weston, Sumit Chopra & Antoine Bordes
The paper introduces a new class of learning models called *memory networks*, which combine inference components with a long-term memory component that the model learns to use jointly. The long-term memory can be read and written to, with the goal of using it for prediction. The authors investigate these models in the context of question answering (QA), where the long-term memory acts as a dynamic knowledge base and the output is a textual response. They evaluate the models on a large-scale QA task and on a smaller but more complex toy task generated from a simulated world. In the latter, they demonstrate the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs. The paper also discusses related work, presents a specific implementation of memory networks for text, and describes experiments and results.
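
To make the architecture concrete, the sketch below mirrors the paper's four components: I converts input text to features, G writes it into memory, O retrieves supporting memories (here with k = 2 chained hops, as in the paper's QA experiments), and R produces the response. This is a minimal illustration under stated assumptions, not the paper's implementation: the bag-of-words featurizer, the dot-product match score, and the small recency bonus (a crude stand-in for the paper's learned embeddings and write-time features) are all assumptions, and the example story is illustrative.

```python
import numpy as np

class MemoryNetwork:
    """Toy I/G/O/R memory network (illustrative, not the paper's model)."""

    def __init__(self, vocab):
        self.vocab = {w: i for i, w in enumerate(vocab)}
        self.memory = []  # list of (feature_vector, raw_sentence) slots

    def I(self, text):
        # Input map: bag-of-words features (the paper learns embeddings).
        v = np.zeros(len(self.vocab))
        for w in text.lower().split():
            if w in self.vocab:
                v[self.vocab[w]] += 1.0
        return v

    def G(self, text):
        # Generalization: write the new input into the next free slot.
        self.memory.append((self.I(text), text))

    def O(self, q_vec, k=2):
        # Output map: select k supporting memories with chained hops; after
        # each hop the query is augmented with the retrieved memory, so the
        # next hop can find facts relevant only in combination with the
        # first. The small index bonus crudely stands in for the paper's
        # write-time features, which let the model prefer recent memories.
        supports, query = [], q_vec.copy()
        for _ in range(k):
            scores = [
                -np.inf if i in supports else float(query @ m_vec) + 1e-3 * i
                for i, (m_vec, _) in enumerate(self.memory)
            ]
            best = int(np.argmax(scores))
            supports.append(best)
            query = query + self.memory[best][0]
        return supports

    def R(self, supports):
        # Response map: return the supporting sentences verbatim (the
        # paper's R instead ranks candidate words for a one-word answer).
        return [self.memory[i][1] for i in supports]

    def answer(self, question, k=2):
        return self.R(self.O(self.I(question), k))


vocab = "joe fred went to the kitchen office picked up milk where is".split()
mn = MemoryNetwork(vocab)
for fact in ["Joe went to the kitchen",
             "Fred went to the office",
             "Joe picked up the milk",
             "Joe went to the office"]:
    mn.G(fact)

# Two chained hops recover both supporting facts for the question:
print(mn.answer("Where is the milk"))
# -> ['Joe picked up the milk', 'Joe went to the office']
```

A real memory network learns the scoring function from (question, answer) pairs rather than hard-coding word overlap, and the paper's hashing tricks keep the O step tractable when the memory holds millions of entries.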
Understanding Memory Networks