Abstraction Processes in Artificial Grammar Learning

1997 ◽  
Vol 50 (1) ◽  
pp. 216-252 ◽  
Author(s):  
David R. Shanks ◽  
Theresa Johnstone ◽  
Leo Staggs

Four experiments explored the extent to which abstract knowledge may underlie subjects’ performance when asked to judge the grammaticality of letter strings generated from an artificial grammar. In Experiments 1 and 2 subjects studied grammatical strings instantiated with one set of letters and were then tested on grammatical and ungrammatical strings formed either from the same or a changed letter-set. Even with a change of letter-set, subjects were found to be sensitive to a variety of violations of the grammar. In Experiments 3 and 4, the critical manipulation involved the way in which the training strings were studied: an incidental learning procedure was used for some subjects, and others engaged in an explicit code-breaking task to try to learn the rules of the grammar. When strings were generated from a biconditional (Experiment 4) but not from a standard finite-state grammar (Experiment 3), grammaticality judgements for test strings were independent of their surface similarity to specific studied strings. Overall, the results suggest that transfer in this simple memory task is mediated at least to some extent by abstract knowledge.
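
The transfer manipulation is straightforward to sketch in code. Below is a minimal Python illustration, assuming a hypothetical transition table and letter mapping (not the grammars actually used in these experiments): grammatical strings are generated by walking a finite-state grammar, and the letter-set change re-instantiates each string with new letters while preserving its structure.

```python
import random

# Hypothetical finite-state grammar in the style of Reber-type AGL tasks;
# the transition table is illustrative, not the grammar used in the paper.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 4), ("V", 5)],
    4: [("P", 2), ("S", 5)],
    5: [],                      # accepting state: no outgoing arcs
}

def generate_string(grammar, start=0):
    """Walk the grammar from the start state, emitting one letter per arc."""
    state, letters = start, []
    while grammar[state]:
        letter, state = random.choice(grammar[state])
        letters.append(letter)
    return "".join(letters)

def change_letter_set(string, mapping):
    """Re-instantiate a string with a new letter set, preserving structure."""
    return "".join(mapping[ch] for ch in string)

MAPPING = {"T": "F", "P": "K", "S": "L", "X": "B", "V": "J"}  # hypothetical

training = [generate_string(GRAMMAR) for _ in range(20)]      # same letter set
transfer = [change_letter_set(s, MAPPING) for s in training]  # changed letter set
```

Because the mapping preserves the grammar's structure exactly, any above-chance sensitivity to violations on the changed letter set cannot rest on surface letter overlap alone, which is what makes the transfer test informative about abstraction.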

2021 ◽  
pp. 1-28
Author(s):  
Ana Paula Soares ◽  
Rosa Silva ◽  
Frederica Faria ◽  
Maria Silva Santos ◽  
Helena Mendes Oliveira ◽  
...  

Literacy affects many aspects of language and cognition, including the shift from a more holistic mode of processing to a more analytical, part-based mode. Here we examined whether this shift affects the ability of preschool and primary-school children to learn the rules underlying a finite-state grammar, using an artificial grammar learning (AGL) paradigm implemented with either linguistic (letters) or non-linguistic (colors) materials to test whether children’s AGL performance was modulated by the type of stimuli. Both tasks involved a training phase in which half of the preschool children and half of the primary-school children were exposed to a set of either letter or color strings without any information about the rules underlying the construction of those strings. In a later test phase, they were asked to decide whether a new set of letter or color strings conformed to those rules. Results showed that only primary-school children gave evidence of learning, and, importantly, only with colors. These findings seem to support the view that learning to read promotes reliance on smaller linguistic units, which might hinder the ability of first-graders to learn the rules underlying finite-state grammars implemented with linguistic materials.
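
The letters-versus-colors manipulation amounts to instantiating one abstract grammar with two surface vocabularies. A minimal sketch, with hypothetical symbol inventories rather than the study's actual stimuli:

```python
# Hypothetical surface vocabularies; not the stimuli used in the study.
SURFACE = {
    "letters": {"A": "M", "B": "V", "C": "R", "D": "X"},
    "colors":  {"A": "red", "B": "blue", "C": "green", "D": "yellow"},
}

def instantiate(abstract_string, modality):
    """Map an abstract grammar string onto letter or color tokens."""
    return [SURFACE[modality][symbol] for symbol in abstract_string]

instantiate("ABCA", "letters")   # ['M', 'V', 'R', 'M']
instantiate("ABCA", "colors")    # ['red', 'blue', 'green', 'red']
```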


2002 ◽  
Vol 55 (2) ◽  
pp. 485-503 ◽  
Author(s):  
Pierre Perruchet ◽  
Annie Vinter ◽  
Chantal Pacteau ◽  
Jorge Gallego

A total of 78 adult participants were asked to read a sample of strings generated by a finite-state grammar and, immediately after reading each string, to mark the natural segmentation positions with a slash bar. They repeated the same task after a phase of familiarization with the material, which consisted, depending on the group, of learning items by rote, performing a short-term matching task, or searching for the rules of the grammar. Participants formed the same number of cognitive units before and after the training phase, indicating that they did not tend to form increasingly large units. However, the number of different units reliably decreased, whatever the task participants had performed during familiarization. This result indicates that segmentation became increasingly consistent with the structure of the grammar. A theoretical account of this phenomenon, based on ubiquitous principles of associative memory and learning, is proposed. This account is supported by the ability of a computer model implementing those principles, PARSER, to reproduce the observed pattern of results. The implications of this study for developmental theories aimed at accounting for how children become able to parse sensory input into physically and linguistically relevant units are discussed.
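
PARSER is fully specified in Perruchet and Vinter's own work; the sketch below only illustrates the associative principles the abstract appeals to: each attentional percept spans one to three existing units, the percept is reinforced as a candidate unit, and all stored units decay. The parameter values and the cap on chunk length are assumptions made for illustration, not PARSER's published settings.

```python
import random

def parser_step(lexicon, stream, pos, decay=0.05, threshold=1.0, gain=1.0):
    """One step of a PARSER-like chunking model (simplified sketch)."""
    span = random.randint(1, 3)               # attentional span: 1-3 units
    percept, read = "", 0
    while read < span and pos < len(stream):
        # Read the longest known chunk at this position (the cap of 4
        # primitives per chunk is a simplification), else one primitive.
        for size in range(min(4, len(stream) - pos), 0, -1):
            chunk = stream[pos:pos + size]
            if size == 1 or lexicon.get(chunk, 0.0) >= threshold:
                percept += chunk
                pos += size
                break
        read += 1
    for unit in list(lexicon):                # forgetting: all weights decay
        lexicon[unit] -= decay
        if lexicon[unit] <= 0:
            del lexicon[unit]
    if percept:                               # the percept becomes/strengthens a unit
        lexicon[percept] = lexicon.get(percept, 0.0) + gain
    return pos

# Toy run: a stream with the recurring chunk "ABC" tends to promote "ABC"
# to a high-weight unit, so segmentation grows more consistent over time.
lexicon, stream, pos = {}, "ABCABCXYZ" * 100, 0
while pos < len(stream):
    pos = parser_step(lexicon, stream, pos)
```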


2001 ◽  
Vol 13 (5) ◽  
pp. 648-669 ◽  
Author(s):  
Annette Kinder ◽  
David R. Shanks

A key claim of current theoretical analyses of the memory impairments associated with amnesia is that certain distinct forms of learning and memory are spared. Supporting this claim, B. J. Knowlton and L. R. Squire found that amnesic patients and controls were indistinguishable in their ability to learn about and classify strings of letters generated from a finite-state grammar, but that the amnesics were impaired at recognizing the training strings. We show, first, that this pattern of results is predicted by a single-system connectionist model of artificial grammar learning (AGL) in which amnesia is simulated by a reduced learning rate. We then show in two experiments that a counterintuitive assumption of this model, that classification and recognition are functionally identical in AGL, is correct. In three further simulation studies, we demonstrate that the model also reproduces another type of dissociation, namely between recognition memory and repetition priming. We conclude that the performance of amnesic patients in memory tasks is better understood in terms of a nonselective, rather than a selective, memory deficit.
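
The core of the single-system account is that one familiarity signal drives both grammaticality classification and old/new recognition, with amnesia simulated simply by lowering the learning rate. The sketch below substitutes a simple delta-rule autoassociator for the authors' connectionist model to make that idea concrete; the letter encoding and parameter values are assumptions for illustration.

```python
import numpy as np

ALPHABET = "TPSXV"          # hypothetical letter set
MAX_LEN = 8                 # strings truncated/padded to this length

def encode(string):
    """One-hot encode a letter string, position by position."""
    vec = np.zeros(MAX_LEN * len(ALPHABET))
    for i, ch in enumerate(string[:MAX_LEN]):
        vec[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return vec

def train(strings, lr=0.05, epochs=50):
    """Delta-rule autoassociator; a reduced lr stands in for amnesia."""
    dim = MAX_LEN * len(ALPHABET)
    W = np.zeros((dim, dim))
    for _ in range(epochs):
        for s in strings:
            x = encode(s)
            W += lr * np.outer(x - W @ x, x)   # shrink reconstruction error
    return W

def familiarity(W, string):
    """One signal serves BOTH classification and recognition:
    better reconstruction means more familiar."""
    x = encode(string)
    return -np.linalg.norm(x - W @ x)

control = train(["TSXS", "PVV", "TSSXXVV"], lr=0.05)
amnesic = train(["TSXS", "PVV", "TSSXXVV"], lr=0.01)   # reduced learning rate
```

The key point is that no separate recognition system exists: ranking test strings by this single familiarity score yields both the classification and the recognition decision, so a reduced learning rate degrades both measures together rather than selectively.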


2006 ◽  
Vol 18 (11) ◽  
pp. 1829-1842 ◽  
Author(s):  
Jörg Bahlmann ◽  
Thomas C. Gunter ◽  
Angela D. Friederici

The present study investigated the processing of two types of artificial grammars by means of event-related brain potentials. Two categories of meaningless CV syllables were used in each grammar type. The two grammars differed with regard to the type of underlying rule. The finite-state grammar (FSG) followed the rule (AB)^n, thereby generating local transitions between As and Bs (e.g., n = 2: ABAB). The phrase structure grammar (PSG) followed the rule A^nB^n, thereby generating center-embedded structures in which the first A and the last B embed the middle elements (e.g., n = 2: [A[AB]B]). Two sequence lengths (n = 2, n = 4) were used. Violations of the structures were introduced at different positions in the syllable sequences: early violations were situated at the beginning of a sequence, and late violations were placed at the end. A posteriorly distributed early negativity elicited by violations was present only in FSG. This effect was interpreted as a possible reflection of a violated local expectancy. Moreover, violations in both grammar types elicited a late positivity. This positivity varied as a function of violation position in PSG, but not in FSG. These findings suggest that the late positivity could reflect difficulty of integration in PSG sequences.
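
The two rule types can be stated directly as sequence generators. The following sketch produces (AB)^n and A^nB^n sequences and inserts a category violation at an early or late position; the CV-syllable inventories are hypothetical placeholders, not the study's stimuli.

```python
import random

A_SYLLABLES = ["de", "gi", "le"]   # hypothetical category-A syllables
B_SYLLABLES = ["bo", "ku", "to"]   # hypothetical category-B syllables

def fsg_sequence(n):
    """(AB)^n: local A-B transitions, e.g. n = 2 -> A B A B."""
    seq = []
    for _ in range(n):
        seq += [random.choice(A_SYLLABLES), random.choice(B_SYLLABLES)]
    return seq

def psg_sequence(n):
    """A^n B^n: center-embedded structure, e.g. n = 2 -> A A B B."""
    return ([random.choice(A_SYLLABLES) for _ in range(n)] +
            [random.choice(B_SYLLABLES) for _ in range(n)])

def violate(seq, position):
    """Create an early or late violation by swapping one syllable's category."""
    seq = list(seq)
    wrong = A_SYLLABLES if seq[position] in B_SYLLABLES else B_SYLLABLES
    seq[position] = random.choice(wrong)
    return seq
```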


1989 ◽  
Vol 1 (3) ◽  
pp. 372-381 ◽  
Author(s):  
Axel Cleeremans ◽  
David Servan-Schreiber ◽  
James L. McClelland

We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t−1, together with element t, to predict element t+1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the network has a minimal number of hidden units, patterns on the hidden units come to correspond to the nodes of the grammar, although this correspondence is not necessary for the network to act as a perfect finite-state recognizer. We explore the conditions under which the network can carry information about distant sequential contingencies across intervening elements. Such information is maintained with relative ease if it is relevant at each intermediate step; it tends to be lost when intervening elements do not depend on it. At first glance this may suggest that such networks are not relevant to natural language, in which dependencies may span indefinite distances. However, embeddings in natural language are not completely independent of earlier information. The final simulation shows that long distance sequential contingencies can be encoded by the network even if only subtle statistical properties of embedded strings depend on the early information.
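
A minimal NumPy sketch of such a network is given below, assuming one-hot symbol coding and a one-step truncated gradient; the layer sizes, learning rate, and truncation are simplifications rather than Elman's exact training procedure. The hidden state from t−1 is combined with the current element to predict the next one.

```python
import numpy as np

class ElmanNet:
    """Minimal Elman-style simple recurrent network (illustrative sketch)."""

    def __init__(self, n_symbols, n_hidden=3, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_symbols))
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.5, (n_symbols, n_hidden))
        self.lr, self.h = lr, np.zeros(n_hidden)

    def step(self, x, target):
        """Predict the next symbol from (element t, hidden state at t-1)
        and update all weights with a one-step truncated gradient."""
        h_prev = self.h
        h = np.tanh(self.W_in @ x + self.W_rec @ h_prev)
        logits = self.W_out @ h
        p = np.exp(logits - logits.max()); p /= p.sum()   # softmax prediction
        err = p - target                                  # cross-entropy gradient
        dh = (self.W_out.T @ err) * (1.0 - h ** 2)        # backprop through tanh
        self.W_out -= self.lr * np.outer(err, h)
        self.W_in -= self.lr * np.outer(dh, x)
        self.W_rec -= self.lr * np.outer(dh, h_prev)
        self.h = h
        return p

# Usage: predict each next element of a toy string (in the paper, strings
# come from the finite-state grammar).
symbols = "TPSXV"
onehot = lambda ch: np.eye(len(symbols))[symbols.index(ch)]
net = ElmanNet(n_symbols=len(symbols))
seq = "TSXSV"
for cur, nxt in zip(seq, seq[1:]):
    net.step(onehot(cur), onehot(nxt))
```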

