12-14 July 2012 | Richard Socher, Brody Huval, Christopher D. Manning, Andrew Y. Ng
This paper introduces a matrix-vector recursive neural network (MV-RNN) model for semantic compositionality. The model assigns a vector and a matrix to each node in a parse tree, where the vector captures the meaning of the constituent and the matrix captures how it modifies the meaning of neighboring words or phrases. The MV-RNN can learn the meaning of operators in propositional logic and natural language. The model achieves state-of-the-art performance on three tasks: predicting fine-grained sentiment distributions of adverb-adjective pairs, classifying sentiment labels of movie reviews, and classifying semantic relationships between nouns using the syntactic path between them.
The MV-RNN combines the strengths of both linear and nonlinear composition models by assigning a vector and a matrix to every word and learning an input-specific, nonlinear, compositional function for computing vector and matrix representations for multi-word sequences. The model uses a neural network as the final merging function, allowing it to capture semantic compositionality in a syntactically plausible way. The model outperforms previous state-of-the-art models on full sentence sentiment prediction of movie reviews and can also be used to find relationships between words using the learned phrase vectors.
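The merging step described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the dimension, the random parameters, and the variable names (`W`, `W_M`) are assumptions standing in for quantities that are learned in the actual model. The key idea is that each child's matrix transforms the sibling's vector before a shared neural layer merges them, and the parent also gets a matrix of its own, so composition can recurse.

```python
import numpy as np

n = 4  # embedding dimension (illustrative; the paper learns much larger models)
rng = np.random.default_rng(0)

# Global composition parameters (learned in the model; random here for the sketch)
W = rng.standard_normal((n, 2 * n)) * 0.1    # merges the two cross-modified vectors
W_M = rng.standard_normal((n, 2 * n)) * 0.1  # merges the two child matrices

def compose(a, A, b, B):
    """Merge children (a, A) and (b, B) into a parent (p, P).

    p = tanh(W [B a; A b])  -- each child's matrix modifies the sibling's vector
    P = W_M [A; B]          -- the parent matrix is a linear map of the child matrices
    """
    p = np.tanh(W @ np.concatenate([B @ a, A @ b]))
    P = W_M @ np.vstack([A, B])
    return p, P

# Two toy "words", each with a meaning vector and an operator matrix
a, A = rng.standard_normal(n), np.eye(n)
b, B = rng.standard_normal(n), np.eye(n)
p, P = compose(a, A, b, B)
print(p.shape, P.shape)  # (4,) (4, 4)
```

Because the parent again has both a vector and a matrix, the same `compose` function applies at every internal node of the parse tree, which is what makes the composition input-specific: a word with a strong operator matrix (e.g. a negation) reshapes its sibling's vector before merging.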
The MV-RNN is trained using a softmax classifier to predict a class distribution over sentiment or relationship classes. The model's performance is evaluated on several tasks, including sentiment analysis and semantic relationship classification. The model outperforms all previous approaches on the SemEval-2010 Task 8 competition without using any hand-designed semantic resources such as WordNet or FrameNet. By adding WordNet hypernyms, POS and NER tags, the model outperforms the state of the art that uses significantly more resources.
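The softmax prediction layer mentioned above can be sketched as follows, again as an illustrative stand-in: the class count, the weight matrix `W_label`, and the random phrase vector are assumptions, and in the model the classifier is trained jointly with the composition parameters on labeled nodes.

```python
import numpy as np

n, n_classes = 4, 5  # e.g. fine-grained sentiment classes (sizes are illustrative)
rng = np.random.default_rng(1)
W_label = rng.standard_normal((n_classes, n)) * 0.1  # classifier weights (learned)

def predict_distribution(p):
    """Softmax over classes, applied to a node's composed vector p."""
    scores = W_label @ p
    scores -= scores.max()   # subtract the max for numerical stability
    e = np.exp(scores)
    return e / e.sum()

p = rng.standard_normal(n)   # a phrase vector produced by the recursive composition
dist = predict_distribution(p)
print(dist.sum())            # a proper distribution: sums to 1
```

The same classifier shape serves both evaluation settings in the summary: sentiment labels at sentence nodes, and relationship classes at the node covering the syntactic path between two nouns.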
The MV-RNN learns propositional logic operators such as and, or, and not from only a few training examples, and handles more complex logical functions by combining these operators recursively. On a standard benchmark dataset of movie reviews it achieves the highest performance on full-length sentences, and it also classifies semantic relationships between nouns in a sentence, demonstrating that it learns compositional meaning representations for multi-word phrases and full sentences.
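The recursive combination of operators can be sketched as a bottom-up walk over a binary tree, reusing the vector-matrix merge from the composition step. Everything here is an illustrative assumption (the tiny dimension, the random lexicon, the `compose` parameters); the point is only that one function, applied recursively, yields a vector and a matrix for every phrase, so learned operators like "not" can act at any depth.

```python
import numpy as np

n = 2
rng = np.random.default_rng(2)
W = rng.standard_normal((n, 2 * n)) * 0.1
W_M = rng.standard_normal((n, 2 * n)) * 0.1

def compose(a, A, b, B):
    """One vector-matrix merge: children (a, A), (b, B) -> parent (p, P)."""
    p = np.tanh(W @ np.concatenate([B @ a, A @ b]))
    P = W_M @ np.vstack([A, B])
    return p, P

def encode(tree, lexicon):
    """Encode a binary parse tree bottom-up into a (vector, matrix) pair.

    A tree is either a word (string) looked up in the lexicon, or a
    (left, right) pair of subtrees merged with compose().
    """
    if isinstance(tree, str):
        return lexicon[tree]
    left, right = tree
    a, A = encode(left, lexicon)
    b, B = encode(right, lexicon)
    return compose(a, A, b, B)

# Toy lexicon: each word gets a random vector and a near-identity matrix
lexicon = {w: (rng.standard_normal(n),
               np.eye(n) + 0.1 * rng.standard_normal((n, n)))
           for w in ["not", "very", "good"]}

p, P = encode(("not", ("very", "good")), lexicon)
print(p.shape, P.shape)  # (2,) (2, 2)
```

With trained parameters, the matrix learned for "not" would flip the representation of whatever subtree it attaches to, which is how the recursive combination of a few learned operators covers more complex logical functions.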