November 1-5, 2016 | Duyu Tang, Bing Qin*, Ting Liu
The paper introduces a deep memory network for aspect-level sentiment classification that explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Unlike feature-based SVMs and sequential neural models such as LSTMs, the approach stacks multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets show that the method performs comparably to state-of-the-art feature-based SVM systems and significantly outperforms LSTM and attention-based LSTM architectures. The deep memory network is also fast: a 9-layer model runs 15 times faster than an LSTM implementation on a CPU. The paper further explores different attention strategies and demonstrates that combining content and location information improves performance.
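To make the layered-attention idea concrete, here is a minimal NumPy sketch of the core mechanism: the context word vectors form an external memory, the aspect vector is the initial query, and each "hop" attends over the memory and updates the query. All dimensions, initializations, the scoring function, and the toy location weighting below are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n, hops = 4, 6, 3              # embedding size, context length, number of layers

memory = rng.normal(size=(n, d))  # context word vectors = external memory
aspect = rng.normal(size=d)       # aspect vector, the query for the first hop

W_att = rng.normal(size=(1, 2 * d))   # content-attention scoring weights (assumed shape)
b_att = np.zeros(1)
W_lin = rng.normal(size=(d, d))       # linear transform of the query at each hop

# Toy location weighting: words closer to the aspect get larger memory weights.
positions = np.abs(np.arange(n) - n // 2)   # hypothetical distances to the aspect
mem = memory * (1.0 - positions / n)[:, None]

query = aspect
for _ in range(hops):
    # Score each memory slot m_i against the current query:
    # g_i = tanh(W_att [m_i ; query] + b_att)
    scores = np.array([
        np.tanh(W_att @ np.concatenate([m, query]) + b_att)[0] for m in mem
    ])
    alpha = softmax(scores)               # attention weights over context words
    attended = alpha @ mem                # weighted sum of the memory
    query = attended + W_lin @ query      # this hop's output feeds the next hop

print("final representation:", query)     # would be fed to a softmax classifier
```

Because each hop is only a soft lookup plus a linear map, with no recurrence over the sequence, stacking several hops remains cheap, which is consistent with the reported speed advantage over LSTM on CPU.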