This article introduces three statistical models for natural language parsing, extending methods from probabilistic context-free grammars to lexicalized grammars. These models represent parse trees as sequences of decisions corresponding to a head-centered, top-down derivation of the tree. Independence assumptions lead to parameters that encode various linguistic features, such as the X-bar schema, subcategorization, ordering of complements, placement of adjuncts, bigram lexical dependencies, wh-movement, and preferences for close attachment. The models are evaluated on the Penn Wall Street Journal Treebank, achieving accuracy competitive with other models. The article also discusses refinements to the models, including the handling of nonrecursive NPs, coordination, punctuation, and sentences with empty subjects. Experiments and linguistic examples are provided to analyze the models' performance and characteristics.
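As a rough illustration of the head-centered decomposition the abstract describes (a sketch reconstructed from the standard head-driven formulation, not quoted from the article itself), the expansion of a parent nonterminal $P$ with head word $h$ into a head child $H$ flanked by left and right modifiers can be factored, under the models' independence assumptions, as

\[
P\bigl(P(h) \rightarrow L_n(l_n)\dots L_1(l_1)\,H(h)\,R_1(r_1)\dots R_m(r_m)\bigr)
= P_h(H \mid P, h)\;\prod_{i=1}^{n+1} P_l\bigl(L_i(l_i) \mid P, H, h\bigr)\;\prod_{i=1}^{m+1} P_r\bigl(R_i(r_i) \mid P, H, h\bigr),
\]

where $L_{n+1} = R_{m+1} = \mathit{STOP}$ terminates modifier generation on each side. Roughly speaking, the three models differ in how much additional history each modifier decision is conditioned on, such as a distance measure (capturing close-attachment preferences), subcategorization frames, and gap features for wh-movement.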