Gradual Learning Algorithm
Recently Published Documents

TOTAL DOCUMENTS: 11 (FIVE YEARS: 0)
H-INDEX: 5 (FIVE YEARS: 0)

2020 · Vol. 51 (1) · pp. 97–123
Author(s): Giorgio Magri, Benjamin Storme

The Calibrated Error-Driven Ranking Algorithm (CEDRA; Magri 2012) is shown to fail on two test cases of phonologically conditioned variation from Boersma and Hayes (2001). The failure of the CEDRA raises a serious unsolved challenge for learnability research in stochastic Optimality Theory, because the CEDRA itself was proposed to repair a learnability problem (Pater 2008) encountered by the original Gradual Learning Algorithm. This result is supported by both simulation results and a detailed analysis whereby a few constraints and a few candidates at a time are recursively “peeled off” until we are left with a “core” small enough that the behavior of the learner is easy to interpret.
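For orientation, here is a minimal Python sketch of the error-driven re-ranking step that stochastic-OT learners of this family share: ranking values are perturbed by evaluation noise at production time, and a mismatch between the learner's output and the observed form triggers small promotions and demotions. The constraint names, tableaux, and plasticity value are illustrative assumptions, not the Boersma and Hayes test cases; the CEDRA differs from this plain update in how the promotion and demotion amounts are calibrated.

```python
import random

# A minimal sketch of the Gradual Learning Algorithm for stochastic OT
# (in the spirit of Boersma & Hayes 2001). The toy constraints, tableau,
# and plasticity value are illustrative assumptions.

PLASTICITY = 0.1   # size of each re-ranking step
NOISE_SD = 2.0     # evaluation noise of stochastic OT

ranking = {"Con1": 100.0, "Con2": 100.0, "Con3": 100.0}

# violations[candidate][constraint] = number of violation marks
violations = {
    "cand_a": {"Con1": 1, "Con2": 0, "Con3": 0},
    "cand_b": {"Con1": 0, "Con2": 1, "Con3": 1},
}

def optimal_candidate(ranking):
    """Sample a noisy ranking, then pick the candidate whose violation
    profile is best in that constraint order (lexicographic comparison)."""
    noisy = {c: v + random.gauss(0.0, NOISE_SD) for c, v in ranking.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)
    return min(violations, key=lambda cand: [violations[cand][c] for c in order])

def gla_update(ranking, observed):
    """On an error, demote constraints violated more by the observed
    (correct) form and promote those violated more by the learner's output."""
    predicted = optimal_candidate(ranking)
    if predicted == observed:
        return
    for c in ranking:
        if violations[observed][c] > violations[predicted][c]:
            ranking[c] -= PLASTICITY   # penalizes the correct form: demote
        elif violations[observed][c] < violations[predicted][c]:
            ranking[c] += PLASTICITY   # penalizes the learner's error: promote

for _ in range(1000):
    gla_update(ranking, observed="cand_a")
```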


Author(s):  
Karen Jesney

Many error-driven learning algorithms for constraint-based phonological grammars, including the Gradual Learning Algorithm for Optimality Theory and Harmonic Grammar, predict that more frequent input forms will be acquired earlier than less frequent input forms – a fact that has commonly been taken as a virtue of these models. These models also predict, however, that the rate of learning for more frequent input forms should be faster than the rate of learning for less frequent input forms. In other words, these models predict that sequence and rate of acquisition are related: structures acquired earlier in the course of learning will be acquired more rapidly, while those acquired relatively later will be acquired more slowly. This paper explicates these predictions and argues that they are not consistently supported by child language data. Evidence from six children’s acquisition of consonant clusters is presented, demonstrating that, contrary to the predictions of the learning models, learning sequence and rate of acquisition are largely dissociated.
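The frequency prediction can be made concrete with a small simulation. In the sketch below (a toy model, not Jesney's data or any particular published learner), a form's weight is updated only on the trials where that form occurs, so a form sampled nine times as often both starts succeeding earlier and improves roughly nine times as fast: sequence and rate are coupled, exactly the link the paper questions.

```python
import random

# Toy illustration of the sequence/rate coupling in error-driven
# learning. The two "forms", their frequencies, the acquisition
# threshold, and the step size are illustrative assumptions.

random.seed(1)
PLASTICITY = 0.05
TARGET = 5.0                     # weight a form needs before it is produced correctly
freq = {"frequent_form": 0.9, "rare_form": 0.1}
weight = {"frequent_form": 0.0, "rare_form": 0.0}
acquired_at = {}

for t in range(1, 20001):
    form = random.choices(list(freq), weights=list(freq.values()))[0]
    if weight[form] < TARGET:            # learner still errs on this form
        weight[form] += PLASTICITY       # error-driven update
        if weight[form] >= TARGET and form not in acquired_at:
            acquired_at[form] = t

print(acquired_at)
# e.g. frequent_form at t ~ 110 and rare_form at t ~ 1000: the frequent
# form is acquired earlier AND its weight climbs roughly 9x faster.
```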


2013 · Vol. 44 (4) · pp. 569–609
Author(s): Giorgio Magri

Various authors have recently endorsed Harmonic Grammar (HG) as a replacement for Optimality Theory (OT). One argument for this move is that OT seems not to have close correspondents within machine learning while HG allows methods and results from machine learning to be imported into computational phonology. Here, I prove that this argument in favor of HG and against OT is wrong. In fact, I show that any algorithm for HG can be turned into an algorithm for OT. Hence, HG has no computational advantages over OT. This result allows tools from machine learning to be systematically adapted to OT. As an illustration of this new toolkit for computational OT, I prove convergence for a slight variant of Boersma’s (1998) (nonstochastic) Gradual Learning Algorithm.
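The converse, better-known direction of the correspondence is easy to demonstrate: an OT ranking can be simulated by an HG weighting whose weights are exponentially spaced, provided violation counts are bounded. The sketch below illustrates only that familiar direction; the paper's actual construction, turning HG algorithms into OT algorithms, is not reproduced here. The tableau and the bound B are illustrative assumptions.

```python
# Simulating an OT ranking with exponentially spaced HG weights,
# assuming each constraint assigns at most B violation marks per
# candidate. Constraint names and candidates are toy assumptions.

B = 3  # assumed upper bound on violations per constraint per candidate

def ot_winner(candidates, ranking):
    """OT: compare violation vectors lexicographically, highest rank first."""
    return min(candidates, key=lambda v: [v[c] for c in ranking])

def hg_winner(candidates, weights):
    """HG: compare weighted sums of violations."""
    return min(candidates, key=lambda v: sum(weights[c] * v[c] for c in v))

ranking = ["C1", "C2", "C3"]                       # C1 >> C2 >> C3
weights = {c: (B + 1) ** (len(ranking) - i - 1)    # 16, 4, 1
           for i, c in enumerate(ranking)}

candidates = [{"C1": 0, "C2": 2, "C3": 0},
              {"C1": 0, "C2": 1, "C3": 3}]
assert ot_winner(candidates, ranking) == hg_winner(candidates, weights)
```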


Phonology · 2012 · Vol. 29 (2) · pp. 213–269
Author(s): Giorgio Magri

According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. Two classical examples are Tesar & Smolensky's (1998) Error-Driven Constraint Demotion (EDCD) and Boersma's (1998) Gradual Learning Algorithm (GLA). Yet EDCD performs only constraint demotion, and is thus shown to predict a ranking dynamics that is too simple from a modelling perspective. The GLA performs constraint promotion too, but has been shown not to converge. This paper develops a complete theory of convergence for error-driven ranking algorithms that perform both constraint demotion and promotion. In particular, it shows that convergent constraint promotion can be achieved (with an error bound that compares well to that of EDCD) through proper calibration of the amount by which constraints are promoted.
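The shape of such a calibrated update can be sketched as follows: on each error, loser-preferring constraints are demoted by a fixed amount, and winner-preferring constraints are promoted by an amount chosen so that the total promotion stays strictly below the total demotion; it is this calibration that the convergence argument turns on. The specific promotion formula and constraint names below are illustrative choices satisfying that condition, not necessarily the paper's exact calibration.

```python
# A minimal sketch of a calibrated error-driven ranking update in the
# spirit described above. The promotion formula is one illustrative
# choice that keeps total promotion below total demotion.

def calibrated_update(ranking, winner_prefs, loser_prefs):
    """ranking: constraint -> numeric ranking value.
    winner_prefs / loser_prefs: constraints preferring the intended
    winner / the learner's wrong output on the current error."""
    w, l = len(winner_prefs), len(loser_prefs)
    promotion = l / (w + 1)      # total promotion l*w/(w+1) < total demotion l
    for c in loser_prefs:
        ranking[c] -= 1.0        # demote constraints responsible for the error
    for c in winner_prefs:
        ranking[c] += promotion  # promote, but by less overall than was demoted

ranking = {"Max": 10.0, "Dep": 10.0, "NoCoda": 10.0}
calibrated_update(ranking, winner_prefs=["Max"], loser_prefs=["NoCoda"])
```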


2009 · Vol. 40 (4) · pp. 667–686
Author(s): Paul Boersma

This article shows that Error-Driven Constraint Demotion (EDCD), an error-driven learning algorithm proposed by Tesar (1995) for Prince and Smolensky's (1993/2004) version of Optimality Theory, can fail to converge to a correct totally ranked hierarchy of constraints, unlike the earlier non-error-driven learning algorithms proposed by Tesar and Smolensky (1993). The cause of the problem is found in Tesar's use of “mark-pooling ties,” indicating that EDCD can be repaired by assuming Anttila's (1997) “permuting ties” instead. Proofs show, and simulations confirm, that totally ranked hierarchies can indeed be found by both this repaired version of EDCD and Boersma's (1998) Minimal Gradual Learning Algorithm.
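For concreteness, here is a minimal sketch of the demotion step EDCD performs on each error, using a stratified-hierarchy encoding (smaller index = higher stratum): every loser-preferring constraint ranked at or above the highest-ranked winner-preferring constraint is demoted to the stratum just below it. The constraint names are illustrative assumptions, and the mark-pooling versus permuting treatment of ties that the article turns on is deliberately not modelled.

```python
# A toy sketch of EDCD's demotion-only update on a stratified hierarchy.

def edcd_demote(stratum, winner_prefs, loser_prefs):
    """stratum: constraint -> stratum index (0 = top stratum).
    winner_prefs / loser_prefs: constraints preferring the correct
    winner / the learner's wrong output on the current error."""
    pivot = min(stratum[c] for c in winner_prefs)   # highest winner-preferrer
    for c in loser_prefs:
        if stratum[c] <= pivot:
            stratum[c] = pivot + 1                  # demote just below the pivot

stratum = {"Parse": 0, "Fill": 0, "Onset": 1}
edcd_demote(stratum, winner_prefs=["Onset"], loser_prefs=["Parse"])
# Parse moves from stratum 0 to stratum 2, immediately below Onset.
```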


2005 · Vol. 38 · pp. 187
Author(s): Jason Mattausch

The purpose of this dissertation is to defend the idea that the empirical responsibilities of binding theory can be handled in a more psychologically and historically realistic way when assigned to the field of pragmatics. In particular, I wish to show that Optimality Theory (OT; Prince & Smolensky, 1993), the stochastic OT and Gradual Learning Algorithm of Boersma (1998), the Recoverability OT of Wilson (2001) and Buchwald et al. (2002), and the bidirectional OT of Blutner (2000b) and Bidirectional Gradual Learning Algorithm of Jäger (2003a) can all participate in a formal framework in which one can spell out and justify the idea that the distributional behavior of bound pronouns and reflexives is a pragmatic phenomenon.

