The paper presents a new parser that achieves high precision and recall for Penn Treebank-style parse trees, outperforming previous state-of-the-art parsers. The parser uses a maximum-entropy-inspired model for conditioning and smoothing, which makes it straightforward to test and combine different conditioning events. Key contributions include:
1. **Probabilistic Generative Model**: The parser assigns probabilities to parses through a top-down generative process, producing each constituent's pre-terminal, lexical head, and expansion conditioned on its label and surrounding context (a rough sketch of this factorization appears after this list).
2. **Maximum-Entropy-Inspired Model**: This model factors each conditional probability into a product over an ordered sequence of conditioning features (also sketched below), which makes smoothing flexible and effective.
3. **Feature Schema and Smoothing**: The parser conditions on various aspects of the parse, such as labels, pre-terminals, and lexical heads, via a schema of feature functions; standard deleted interpolation is used for smoothing (a small code sketch follows the concluding paragraph).
4. **Experimental Results**: The parser achieves 90.1% average precision/recall for sentences of length ≤ 40 and 89.5% for sentences of length ≤ 100 on the Wall Street Journal section of the Penn Treebank, a 13% error reduction over the previous best single-parser results.
5. **Contributions and Improvements**: The parser incorporates several refinements, such as guessing a constituent's pre-terminal before its lexical head, explicit marking of coordination, and the use of Markov grammars. These enhancements significantly improve performance, with a notable 2% improvement attributed to guessing the pre-terminal before the lexical head.
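For concreteness, the model described in items 1–3 can be sketched roughly as follows. The notation is a paraphrase, not the paper's exact equations: for a constituent c, l(c), t(c), h(c), and e(c) denote its label, head pre-terminal, lexical head, and expansion; H(c) stands for whatever conditioning history the parser draws from the surrounding parse; and f_1, ..., f_n is an ordered list of conditioning features.

```latex
% Top-down generative factorization (item 1), notation paraphrased:
p(\pi) \;=\; \prod_{c \in \pi}
    p\bigl(t(c) \mid l(c), H(c)\bigr)\,
    p\bigl(h(c) \mid t(c), l(c), H(c)\bigr)\,
    p\bigl(e(c) \mid l(c), t(c), h(c), H(c)\bigr)

% Maximum-entropy-inspired decomposition (item 2): each conditional becomes a
% telescoping product of ratios over the ordered features f_1, ..., f_n.
p(x \mid f_1, \dots, f_n) \;=\;
    p(x)\,\prod_{j=1}^{n}
    \frac{\hat{p}(x \mid f_1, \dots, f_j)}{\hat{p}(x \mid f_1, \dots, f_{j-1})}

% Standard deleted interpolation (item 3) for each smoothed estimate:
\hat{p}(x \mid f_1, \dots, f_j) \;=\;
    \lambda_j\, p_{\mathrm{ML}}(x \mid f_1, \dots, f_j)
    \;+\; (1 - \lambda_j)\, \hat{p}(x \mid f_1, \dots, f_{j-1})
```

Unsmoothed, the ratio product telescopes back to the full conditional; the benefit comes from smoothing each factor separately, which is what makes it cheap to add, drop, or reorder conditioning features.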
The paper concludes by highlighting the overall error reduction and the importance of the maximum-entropy-inspired model for its flexibility and effectiveness in smoothing.
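As a concrete, though purely illustrative, picture of the deleted-interpolation smoothing mentioned in item 3, here is a minimal Python sketch. The data layout, function name, and the count-based lambda schedule are assumptions made for the example, not details taken from the paper.

```python
from collections import Counter

def interpolated_prob(x, features, counts, vocab_size):
    """Recursive deleted-interpolation estimate of p-hat(x | f_1..f_j).

    counts[j] maps a length-j context tuple (f_1, ..., f_j) to a Counter of
    outcomes observed with that context.  The lambda schedule (trusting the
    maximum-likelihood estimate more as the context count grows) is an
    illustrative assumption, not the weighting used in the paper.
    """
    if not features:
        # Bottom of the back-off chain: fall back to a uniform estimate.
        return 1.0 / vocab_size
    ctx_counts = counts[len(features)].get(tuple(features), Counter())
    total = sum(ctx_counts.values())
    lam = total / (total + 5.0)                   # more data -> larger lambda
    ml = ctx_counts[x] / total if total else 0.0  # maximum-likelihood estimate
    backoff = interpolated_prob(x, features[:-1], counts, vocab_size)
    return lam * ml + (1.0 - lam) * backoff

# Toy usage: estimate p(head = "price" | f_1 = pre-terminal "NN", f_2 = label "NP")
counts = {
    1: {("NN",): Counter({"price": 3, "rate": 2})},
    2: {("NN", "NP"): Counter({"price": 1})},
}
print(interpolated_prob("price", ["NN", "NP"], counts, vocab_size=10000))
```

Each recursive call drops the last feature, so the estimate backs off smoothly from the most specific context down to a uniform distribution.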