Convergence of error-driven ranking algorithms
Published online by Cambridge University Press: 06 September 2012
Abstract
According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. Two classical examples are Tesar & Smolensky's (1998) Error-Driven Constraint Demotion (EDCD) and Boersma's (1998) Gradual Learning Algorithm (GLA). Yet EDCD performs only constraint demotion, and is thus shown to predict ranking dynamics that are too simple from a modelling perspective. The GLA performs constraint promotion too, but has been shown not to converge. This paper develops a complete theory of convergence for error-driven ranking algorithms that perform both constraint demotion and promotion. In particular, it shows that convergent constraint promotion can be achieved (with an error bound that compares well to that of EDCD) through proper calibration of the amount by which constraints are promoted.
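The update at the heart of this model is simple enough to sketch. The Python fragment below is an illustrative sketch, not the paper's algorithm: constraints carry numeric ranking values, and on each error, loser-preferring constraints (those the winner violates more) are demoted while winner-preferring constraints are promoted. The function name `update`, the dictionary representation, and the violation-count inputs are assumptions for illustration. The promotion amount shown (number of demoted constraints divided by one more than the number of promoted constraints) keeps total promotion strictly below total demotion, which is the flavour of calibration the abstract alludes to; the paper itself derives the precise conditions under which such calibrated promotion guarantees convergence.

```python
def update(values, winner_viols, loser_viols):
    """One error-driven re-ranking step (illustrative sketch).

    values       : dict mapping constraint names to numeric ranking values
    winner_viols : dict of violation counts assigned to the intended winner
    loser_viols  : dict of violation counts assigned to the erroneous loser
    """
    # Loser-preferring constraints: the winner violates them more.
    demote = [c for c in values
              if winner_viols.get(c, 0) > loser_viols.get(c, 0)]
    # Winner-preferring constraints: the loser violates them more.
    promote = [c for c in values
               if loser_viols.get(c, 0) > winner_viols.get(c, 0)]
    if not demote:
        return  # no error to correct

    # Calibrated promotion amount: with w promoted and d demoted
    # constraints, total promotion w*d/(w+1) stays strictly below total
    # demotion d. (Illustrative choice; EDCD, by contrast, demotes only
    # and restricts demotion to undominated loser-preferrers.)
    amount = len(demote) / (len(promote) + 1)

    for c in demote:
        values[c] -= 1.0
    for c in promote:
        values[c] += amount
```

A toy run under these assumptions: starting from `values = {'Max': 10.0, 'Dep': 10.0, 'NoCoda': 10.0}`, an error where the winner violates `NoCoda` and the loser violates `Max` demotes `NoCoda` to 9.0 and promotes `Max` to 10.5, so the net ranking mass decreases on every error.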
Copyright © Cambridge University Press 2012