Artificial grammar learning meets formal language theory: an overview
2012 · Philosophical Transactions of the Royal Society B, Vol. 367 (1598), pp. 1933–1955
Author(s): W. Tecumseh Fitch, Angela D. Friederici

Formal language theory (FLT), part of the broader mathematical theory of computation, provides a systematic terminology and set of conventions for describing rules and the structures they generate, along with a rich body of discoveries and theorems concerning generative rule systems. Despite its name, FLT is not limited to human language: it applies equally to computer programs, music, visual patterns, animal vocalizations, RNA structure and even dance. In the last decade, this theory has been profitably used to frame hypotheses and to design brain-imaging and animal-learning experiments, mostly within the 'artificial grammar-learning' paradigm. We offer a brief, non-technical introduction to FLT and then a more detailed analysis of empirical research based on this theory. We suggest that progress has been hampered by a pervasive conflation of distinct issues, including hierarchy, dependency, complexity and recursion. We offer clarifications of several relevant hypotheses and the experimental designs necessary to test them. Finally, we review the recent brain-imaging literature that uses formal languages, identifying areas of convergence and outstanding debates. We conclude that FLT has much to offer scientists who are interested in rigorous empirical investigations of human cognition from a neuroscientific and comparative perspective.
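
As a concrete illustration (a minimal Python sketch, not code from the paper), the snippet below implements two string sets often used in this literature to contrast levels of the Chomsky hierarchy: the regular language (AB)^n, which a finite-state device can recognize, and the context-free language A^n B^n, which requires counting. Function names are illustrative only.

# Minimal sketch: regular (AB)^n vs. context-free A^n B^n.
import re

def generate_ab_n(n):
    """Regular grammar: n repetitions of the pair 'AB', e.g. ABABAB."""
    return "AB" * n

def generate_a_n_b_n(n):
    """Context-free grammar: n A's followed by n B's, e.g. AAABBB."""
    return "A" * n + "B" * n

def is_ab_n(s):
    """(AB)^n is recognizable with a regular expression (finite memory)."""
    return re.fullmatch(r"(AB)+", s) is not None

def is_a_n_b_n(s):
    """A^n B^n needs an unbounded count, beyond any finite-state machine."""
    m = re.fullmatch(r"(A+)(B+)", s)
    return m is not None and len(m.group(1)) == len(m.group(2))

if __name__ == "__main__":
    for n in (1, 2, 3):
        s1, s2 = generate_ab_n(n), generate_a_n_b_n(n)
        print(s1, is_ab_n(s1), "|", s2, is_a_n_b_n(s2))

A learner that generalizes A^n B^n to unseen lengths must track a count; that is the supra-regular property such AGL designs aim to probe.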

Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning
2020 · Scientific Reports, Vol. 10 (1)
Author(s): Andrea Alamia, Victor Gauducheau, Dimitri Paisios, Rufin VanRullen

In recent years, artificial neural networks have achieved performance close to, or better than, that of humans in several domains: tasks that were previously human prerogatives, such as language processing, have seen remarkable improvements in state-of-the-art models. One benefit of this technological boost is that it facilitates comparison between different neural networks and human performance, deepening our understanding of human cognition. Here, we investigate which neural network architecture (feedforward vs. recurrent) matches human behavior in artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies showed that artificial grammars can be learnt by human subjects after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels in humans and in feedforward and recurrent networks. Our results show that both architectures can "learn" (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of grammar complexity. Moreover, across ten regular grammars, simpler and more explicit grammars were learnt better by recurrent architectures. Paralleling visual processing, where feedforward and recurrent architectures have been related to unconscious and conscious processes respectively, this supports the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks capture the dynamics involved in implicit learning.
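
To make the architectural contrast concrete, here is a minimal sketch (assumptions: PyTorch, a toy two-symbol grammar, one-hot encoding; the names GrammarMLP and GrammarRNN are illustrative, not the authors' code): a feedforward classifier sees the whole string at once as a flat vector, while a recurrent network consumes it symbol by symbol.

# Minimal sketch: feedforward vs. recurrent grammaticality classifiers.
import torch
import torch.nn as nn

VOCAB = "AB"      # toy two-symbol alphabet
SEQ_LEN = 6       # fixed string length for the feedforward model
N_SYMBOLS = len(VOCAB)

def one_hot(s):
    """Encode a string as a (length, n_symbols) one-hot tensor."""
    idx = torch.tensor([VOCAB.index(c) for c in s])
    return nn.functional.one_hot(idx, N_SYMBOLS).float()

class GrammarMLP(nn.Module):
    """Feedforward: the whole string at once, as a flattened vector."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(SEQ_LEN * N_SYMBOLS, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, x):          # x: (batch, SEQ_LEN, N_SYMBOLS)
        return self.net(x)

class GrammarRNN(nn.Module):
    """Recurrent: the string one symbol at a time, kept in a hidden state."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(N_SYMBOLS, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        _, h = self.rnn(x)         # h: final hidden state, (1, batch, hidden)
        return self.out(h.squeeze(0))

if __name__ == "__main__":
    # Grammatical (AB)^3 vs. an ungrammatical scramble of the same symbols.
    batch = torch.stack([one_hot("ABABAB"), one_hot("AABBAB")])
    labels = torch.tensor([[1.0], [0.0]])
    loss_fn = nn.BCEWithLogitsLoss()
    for model in (GrammarMLP(), GrammarRNN()):
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(200):       # error back-propagation, as in the study
            opt.zero_grad()
            loss = loss_fn(model(batch), labels)
            loss.backward()
            opt.step()
        print(type(model).__name__, float(loss))

Holding the number of training sequences fixed across models, as the study does with human-matched exposure, is what licenses the behavioral comparison; this sketch fixes only the architectural contrast, not the authors' actual stimuli or hyperparameters.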


1997 · Vol. 50 (1), pp. 216–252
Author(s): David R. Shanks, Theresa Johnstone, Leo Staggs

2015 · Vol. 80 (2), pp. 195–211
Author(s): Randall K. Jamieson, Uliana Nevzorova, Graham Lee, D. J. K. Mewhort
