The Gradual Learning Algorithm (GLA) is a constraint-ranking algorithm for learning optimality-theoretic grammars. This paper evaluates the algorithm's capabilities, particularly in comparison with the Constraint Demotion algorithm. The GLA has several advantages: it can learn free variation, deal effectively with noisy data, and account for gradient well-formedness judgments. The paper examines case studies involving Ilokano reduplication and metathesis, Finnish genitive plurals, and the distribution of English light and dark /l/.
The GLA operates over a continuous ranking scale with stochastic candidate evaluation: at every evaluation, each constraint's ranking value is perturbed by noise, so the grammar defines a probability distribution over outputs and can generate variation. Because learning proceeds by small, conservative adjustments to these ranking values, the algorithm is robust to occasional speech errors in its input and can support formal analyses of phenomena involving intermediate well-formedness judgments. Together, the continuous scale and gradual updates yield the GLA's principal advantages: handling optionality, robustness to noise, and gradient well-formedness.
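To make the evaluation and update procedure concrete, here is a minimal Python sketch of stochastic evaluation and one GLA learning step. The toy constraints and candidates, the noise standard deviation, and the plasticity value are illustrative assumptions, not figures from the paper's simulations.

```python
import random

# Minimal sketch of stochastic evaluation plus one GLA update step.
# The grammar, noise level, and plasticity below are illustrative
# assumptions, not values from the paper.

NOISE_SD = 2.0     # std. dev. of evaluation noise on the ranking scale
PLASTICITY = 0.1   # size of each gradual ranking adjustment

ranking = {"Max": 100.0, "Dep": 98.0, "*Coda": 96.0}

# violations[candidate][constraint] = number of violation marks
violations = {
    "pat": {"Max": 0, "Dep": 0, "*Coda": 1},
    "pa":  {"Max": 1, "Dep": 0, "*Coda": 0},
}

def evaluate(ranking, violations):
    """Pick the winner under one noisy sample of the ranking values."""
    noisy = {c: r + random.gauss(0.0, NOISE_SD) for c, r in ranking.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)  # dominant first
    # OT evaluation: lexicographic comparison of violation profiles,
    # read off in the (noisy) dominance order.
    return min(violations,
               key=lambda cand: tuple(violations[cand][c] for c in order))

def gla_update(ranking, violations, observed):
    """One error-driven learning step on a single observed datum."""
    produced = evaluate(ranking, violations)
    if produced == observed:
        return  # no error, no adjustment
    for c in ranking:
        diff = violations[produced][c] - violations[observed][c]
        if diff > 0:    # constraint favors the datum: promote slightly
            ranking[c] += PLASTICITY
        elif diff < 0:  # constraint favors the learner's error: demote
            ranking[c] -= PLASTICITY
```

Because every adjustment moves a ranking value by only a small plasticity increment, no single datum can rerank the grammar outright; this conservatism is what underwrites the robustness claims above.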
The paper presents empirical applications of the GLA. A study of Ilokano reduplication and metathesis shows how the algorithm learns data involving systematic optionality, and a follow-up simulation tests its response to artificially introduced speech errors. The GLA is then tested against corpus output frequencies, replicating Anttila's study of Finnish genitive plurals, and is shown to reproduce results for the distribution of English light and dark /l/ originally derived with a hand-crafted grammar.
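The frequency-matching tests can be understood schematically: sample many stochastic evaluations from the learned grammar and compare the resulting output proportions with corpus frequencies. This sketch reuses the toy `evaluate`, `ranking`, and `violations` defined above; it illustrates the method, not the paper's actual Finnish or Ilokano grammars.

```python
from collections import Counter

def output_frequencies(ranking, violations, n_samples=10_000):
    """Estimate the grammar's output distribution by repeated sampling."""
    counts = Counter(evaluate(ranking, violations) for _ in range(n_samples))
    return {cand: counts[cand] / n_samples for cand in violations}

# With the toy grammar above, the overlapping noisy ranking values of
# Max and *Coda yield free variation between "pat" and "pa", roughly
# {'pat': 0.92, 'pa': 0.08}; proportions like these are what get
# compared against observed corpus frequencies.
print(output_frequencies(ranking, violations))
```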
The GLA is compared directly with the Constraint Demotion algorithm: it is more effective at learning free variation and more robust to noisy data, and its continuous ranking scale permits finer-grained modeling of variable outputs than Constraint Demotion's discrete strata. The paper concludes that the GLA is a workable solution to the research problem posed by Tesar and Smolensky, and that its ability to handle free variation and its robustness to noisy data make it a valuable constraint-ranking learning algorithm.
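For contrast, here is a schematic rendering of Tesar and Smolensky's Error-Driven Constraint Demotion update in the same toy representation; this is an illustration for comparison, not the paper's code. Strata are integers, with higher values dominant.

```python
# Sketch of Error-Driven Constraint Demotion (for contrast with the GLA):
# on an error, constraints favoring the learner's erroneous output are
# demoted below the highest-ranked constraint favoring the datum.

def edcd_update(strata, violations, produced, observed):
    """Demote constraints favoring the error below those favoring the datum."""
    favor_datum = [c for c in strata
                   if violations[produced][c] > violations[observed][c]]
    favor_error = [c for c in strata
                   if violations[observed][c] > violations[produced][c]]
    if not favor_datum or not favor_error:
        return
    pivot = max(strata[c] for c in favor_datum)
    for c in favor_error:
        if strata[c] >= pivot:
            strata[c] = pivot - 1  # all-or-nothing demotion past the pivot
```

Because each demotion leaps past the pivot in a single step, one erroneous datum can rerank the grammar wholesale, whereas the GLA's small symmetric nudges let occasional errors wash out over many learning steps.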