October 25-29, 2014, Doha, Qatar | Danqi Chen, Christopher D. Manning
This paper presents a novel dependency parser that uses neural networks to achieve both high accuracy and fast parsing speed. Traditional dependency parsers rely on millions of sparse indicator features, which are computationally expensive and often generalize poorly. The proposed parser instead learns a neural network classifier over a small number of dense features, sharply reducing feature computation time. The classifier is integrated into a greedy, transition-based dependency parser and yields an improvement of about 2% in both unlabeled and labeled attachment scores on English and Chinese datasets. The parser processes over 1,000 sentences per second at a 92.2% unlabeled attachment score on the English Penn Treebank.
The key contributions are the use of dense representations learned within the parsing task, a neural network architecture that balances accuracy and speed, and the introduction of a cube activation function to better capture interactions between features. Experimental results demonstrate the effectiveness of the proposed parser, showing superior performance compared to existing methods in terms of both accuracy and speed.
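To make the greedy, transition-based setup concrete, the sketch below implements the arc-standard transition system (the one used in this line of work): a stack, a buffer, and three actions (SHIFT, LEFT-ARC, RIGHT-ARC). In the actual parser the action is chosen by the trained neural network classifier; here `score` is a hypothetical stand-in supplied by the caller, so this illustrates only the transition mechanics, not the learned policy.

```python
# Minimal sketch of greedy arc-standard transition-based parsing.
# `score(stack, buffer, action)` is a hypothetical classifier stub; the
# real parser scores legal transitions with a neural network.

def parse(words, score):
    stack = [0]                              # word indices; 0 is ROOT
    buffer = list(range(1, len(words) + 1))  # words awaiting processing
    arcs = []                                # (head, dependent) pairs
    while buffer or len(stack) > 1:
        legal = []
        if buffer:
            legal.append("SHIFT")
        if len(stack) >= 2:
            legal.append("RIGHT-ARC")
            if stack[-2] != 0:               # ROOT may not be a dependent
                legal.append("LEFT-ARC")
        action = max(legal, key=lambda a: score(stack, buffer, a))
        if action == "SHIFT":                # move next word onto the stack
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC":           # top word heads second word
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        else:                                # RIGHT-ARC: second heads top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs
```

Because each word is shifted once and each arc removes one word from the stack, a sentence of n words is parsed in exactly 2n transitions; this linear-time behavior is what enables the speed reported above.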
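The cube activation function mentioned above is simply g(x) = x^3 applied to each hidden unit's pre-activation. Its appeal is that cubing a weighted sum expands into products of up to three inputs (terms like w_i w_j w_k x_i x_j x_k), so a single hidden unit can model three-way feature conjunctions directly, without hand-crafted combination features. A minimal sketch, with illustrative weights:

```python
# Sketch of the cube activation: g(x) = x^3 on a hidden unit's
# pre-activation. Weights and inputs are illustrative, not learned.

def hidden_unit_cube(x, w, b):
    pre = sum(wi * xi for wi, xi in zip(w, x)) + b
    return pre ** 3

# Expanding (x1 + x2)^3 makes the interaction terms explicit:
# x1^3 + 3*x1^2*x2 + 3*x1*x2^2 + x2^3 -- the cross terms are the
# feature interactions the cube captures.
x1, x2 = 2.0, 3.0
assert (x1 + x2) ** 3 == x1**3 + 3 * x1**2 * x2 + 3 * x1 * x2**2 + x2**3
```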