Structural priming in artificial languages and the regularisation of unpredictable variation

2016 · Vol 91 · pp. 158–180
Author(s): Olga Fehér, Elizabeth Wonnacott, Kenny Smith

2018 · Vol 45 (5) · pp. 1054–1072
Author(s): Jessica F. Schwab, Casey Lew-Williams, Adele E. Goldberg

Abstract: Children tend to regularize their productions when exposed to artificial languages, an advantageous response to unpredictable variation. But generalizations in natural languages are typically conditioned by factors that children ultimately learn. In two experiments, adult and six-year-old learners witnessed two novel classifiers that were probabilistically conditioned by semantics. Whereas adults displayed high accuracy in their productions, applying the semantic criteria to both familiar and novel items, children were oblivious to the semantic conditioning. Instead, children regularized their productions, over-relying on only one classifier. However, in a two-alternative forced-choice task, children's performance revealed greater respect for the system's complexity: they selected both classifiers without bias toward one or the other, and were more accurate on familiar items. Given that natural languages are conditioned by multiple factors that children successfully learn, we suggest that their tendency to simplify in production stems from retrieval difficulty when a complex system has not yet been fully learned.
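The contrast described here (adults probabilistically applying a semantic criterion, children over-relying on one classifier) can be made concrete with a small simulation. The sketch below is illustrative only and is not the paper's design: the classifier forms "ka" and "po", the animate/inanimate conditioning, and the 80% reliability are all invented assumptions.

```python
import random

# Hypothetical conditioning: "ka" tends to go with animate nouns,
# "po" with inanimate ones. The forms and the 0.8 reliability are
# invented for illustration.
SEMANTIC_MATCH = {"animate": "ka", "inanimate": "po"}
rng = random.Random(1)

def adult_like(noun_class, p_match=0.8):
    """Apply the semantic criterion probabilistically."""
    match = SEMANTIC_MATCH[noun_class]
    other = "po" if match == "ka" else "ka"
    return match if rng.random() < p_match else other

def child_like(noun_class):
    """Regularize: over-rely on a single classifier, ignoring semantics."""
    return "ka"

nouns = ["animate", "inanimate"] * 50
adult_acc = sum(adult_like(n) == SEMANTIC_MATCH[n] for n in nouns) / len(nouns)
child_acc = sum(child_like(n) == SEMANTIC_MATCH[n] for n in nouns) / len(nouns)
print(adult_acc)  # ~0.8: productions track the semantic conditioning
print(child_acc)  # 0.5: perfectly consistent output, but at chance on semantics
```

The child-like producer is perfectly regular yet semantically at chance, which is exactly why a forced-choice task can reveal knowledge that regularized production hides.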


2009
Author(s): Padraig G. O'Seaghdha, Julio Santiago, Antonio Roman, Jordan L. Knicely

2020
Author(s): Laetitia Zmuda, Charlotte Baey, Paolo Mairano, Anahita Basirat

Abstract: It is well known that individuals can identify novel words in a continuous stream of an artificial language by using statistical dependencies. While the underlying computations (e.g. transitional probabilities between syllables) are thought to be similar from one stream to another, performance is not. According to the "linguistic entrenchment" hypothesis, this is because individuals have prior knowledge about co-occurrences of elements in speech, and this knowledge intervenes during verbal statistical learning. Previous studies focused on task performance. The goal of the current study is to examine the extent to which prior knowledge impacts metacognition (i.e. the ability to evaluate one's own cognitive processes). Participants were exposed to two different artificial languages. Using a fully Bayesian approach, we estimated an unbiased measure of metacognitive efficiency and compared the two languages in terms of task performance and metacognition. While task performance was higher in one of the languages, metacognitive efficiency was similar in both. In addition, a model assuming no correlation between the two languages accounted for our results better than a model in which correlations were introduced. We discuss the implications of our findings for the computations that underlie the interaction between input and prior knowledge during verbal statistical learning.
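The segmentation cue named in this abstract, the transitional probability between syllables, is TP(b|a) = count(a, b) / count(a). The following sketch is a generic illustration of that computation; the syllable inventory and the two "words" are invented and are not the study's materials.

```python
import random
from collections import Counter

def transitional_probs(syllables):
    """Forward transitional probabilities: TP(b|a) = count(a, b) / count(a)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    context_counts = Counter(syllables[:-1])
    return {(a, b): n / context_counts[a] for (a, b), n in pair_counts.items()}

# Invented stream: 200 tokens drawn from two three-syllable "words".
rng = random.Random(0)
words = [("tu", "pi", "ro"), ("go", "la", "bu")]
stream = [syll for _ in range(200) for syll in rng.choice(words)]

tps = transitional_probs(stream)
print(tps[("tu", "pi")])           # within-word transition: 1.0
print(tps.get(("ro", "go"), 0.0))  # across-word transition: ~0.5
```

Word boundaries fall where TP dips (here, from 1.0 within words to roughly 0.5 across them); the entrenchment hypothesis holds that prior knowledge of such co-occurrence statistics modulates how learners exploit this cue.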


2021 · Vol 119 · Article 104220
Author(s): Chi Zhang, Sarah Bernolet, Robert J. Hartsuiker