Abstraction-based Efficiency in the Lexicon

Author(s):  
Anne Cutler

Abstract: Listeners learn from their past experience of listening to spoken words, and use this learning to maximise the efficiency of future word recognition. This paper summarises evidence that the facilitatory effects of drawing on past experience are mediated by abstraction, enabling learning to be generalised across new words and new listening situations. Phoneme category retuning, which allows adaptation to speaker-specific articulatory characteristics, is generalised on the basis of relatively brief experience to words previously unheard from that speaker. Abstract knowledge of prosodic regularities is applied to recognition even of novel words for which these regularities were violated. Prosodic word-boundary regularities drive segmentation of speech into words independently of the membership of the lexical candidate set resulting from the segmentation operation. Each of these different cases illustrates how abstraction from past listening experience has contributed to the efficiency of lexical recognition.

1980
Vol 12 (2)
pp. 97-103
Author(s):
John D. McNeil
Lisbeth Donant

The present study investigated the transfer effect of training in three word recognition strategies. Results are based on a test of the ability to decode unknown words classified as graphophonic, structural, and contextual. Ninety second-, third-, and fourth-grade children and an equal number of children in a non-instructional control group provided the data. Significant results are discussed in light of the effectiveness of the training in helping pupils decode new words and the specific rather than general value of each strategy. The data support the efficacy of children learning multiple word recognition strategies for decoding purposes. The study does not, however, treat the relation of word recognition strategies to comprehension of text.


2018
Vol 61 (1)
pp. 145-158
Author(s):
Chhayakanta Patro
Lisa Lucks Mendel

Purpose: The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and to investigate the facilitative effects of semantic context on the IPs.
Method: Listeners with CIs as well as listeners with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli, and individuals with NH listened to full-spectrum or vocoder-processed speech. IPs were determined for both groups, who listened to gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context in IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words.
Results: The results indicated that spectrotemporal degradation adversely affected IPs for gated words: CI users, as well as participants with NH listening to vocoded speech, had longer IPs than participants with NH who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups, regardless of the spectral composition of the target speech (full-spectrum or vocoded). Finally, we showed that CI users (and listeners with NH hearing vocoded speech) can overcome such word processing difficulties with the help of semantic context and perform as well as listeners with NH.
Conclusion: Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation in the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.
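In a gating task, the isolation point is conventionally the gate at which a listener first identifies the target word correctly and does not change that response at any later gate. A minimal sketch of that computation (the function name, 50-ms gate duration, and response format are illustrative assumptions, not details from the study):

```python
def isolation_point(responses, target, gate_ms=50):
    """Return the isolation point in ms: the cumulative duration of the first
    gate in the final run of correct responses. Returns None if the target is
    never stably identified by the last gate."""
    ip = None
    for gate, response in enumerate(responses, start=1):
        if response == target:
            if ip is None:       # first gate of the current correct run
                ip = gate * gate_ms
        else:
            ip = None            # identification abandoned; reset
    return ip
```

On this convention, a listener who responds "cap, cat, cat, cat" to gated presentations of "cat" has an IP of 100 ms: the second gate begins the run of correct responses that is never abandoned.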


2020
Article 026765832096825
Author(s):
Jeong-Im Han
Song Yi Kim

The present study investigated the influence of orthographic input on the recognition of second language (L2) spoken words with phonological variants, when the first language (L1) and L2 have different orthographic structures. Lexical encoding for intermediate-to-advanced Mandarin learners of Korean was assessed using masked cross-modal and within-modal priming tasks. Given that Korean has obstruent nasalization in the syllable coda, prime-target pairs were created with and without such phonological variants, but the spellings provided in the cross-modal task reflected their unaltered, nonnasalized forms. The results indicate that when L2 learners are exposed to a transparent alphabetic orthography, they show no particular cost in spoken word recognition for L2 phonological variants as long as the variation is regular and rule-governed.


1994
Vol 5 (1)
pp. 42-46
Author(s):
Lynne C. Nygaard
Mitchell S. Sommers
David B. Pisoni

To determine how familiarity with a talker's voice affects perception of spoken words, we trained two groups of subjects to recognize a set of voices over a 9-day period. One group then identified novel words produced by the same set of talkers at four signal-to-noise ratios. Control subjects identified the same words produced by a different set of talkers. The results showed that the ability to identify a talker's voice improved intelligibility of novel words produced by that talker. The results suggest that speech perception may involve talker-contingent processes whereby perceptual learning of aspects of the vocal source facilitates the subsequent phonetic analysis of the acoustic signal.


2014
Vol 23 (2)
pp. 120-133
Author(s):
Kathryn W. Brady
Judith C. Goodman

Purpose: The authors of this study examined whether the type and number of word-learning cues affect how children infer and retain word-meaning mappings and whether the use of these cues changes with age.
Method: Forty-eight 18- to 36-month-old children with typical language participated in a fast-mapping task in which 6 novel words were presented with 3 types of cues to the words' referents, either singly or in pairs. One day later, children were tested for retention of the novel words.
Results: By 24 months of age, children correctly inferred the referents of the novel words at a significant level. Children retained the meanings of words at a significant rate by 30 months of age. Children retained the first 3 of the 6 word-meaning mappings by 24 months of age. For both fast mapping and retention, the efficacy of different cue types changed with development, but children were equally successful whether the novel words were presented with 1 or 2 cues.
Conclusion: The type of information available to children at fast mapping affects their ability to both form and retain word-meaning associations. Providing children with more information in the form of paired cues had no effect on either fast mapping or retention.


2005
Vol 58 (2)
pp. 251-273
Author(s):
Wilma van Donselaar
Mariëtte Koster
Anne Cutler

Three cross-modal priming experiments examined the role of suprasegmental information in the processing of spoken words. All primes consisted of truncated spoken Dutch words. Recognition of visually presented word targets was facilitated by prior auditory presentation of the first two syllables of the same words as primes, but only if they were appropriately stressed (e.g., OKTOBER preceded by okTO-); inappropriate stress, compatible with another word (e.g., OKTOBER preceded by OCto-, the beginning of octopus), produced inhibition. Monosyllabic fragments (e.g., OC-) also produced facilitation when appropriately stressed; if inappropriately stressed, they produced neither facilitation nor inhibition. The bisyllabic fragments that were compatible with only one word produced facilitation to semantically associated words, but inappropriate stress caused no inhibition of associates. The results are explained within a model of spoken-word recognition involving competition between simultaneously activated phonological representations followed by activation of separate conceptual representations for strongly supported lexical candidates; at the level of the phonological representations, activation is modulated by both segmental and suprasegmental information.
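In cross-modal priming, facilitation and inhibition are read off as differences in mean lexical-decision latency between a control condition and a primed condition. A minimal sketch of that arithmetic (the function name and the control-minus-primed convention are illustrative assumptions):

```python
def priming_effect(control_rts_ms, primed_rts_ms):
    """Mean reaction-time difference, control minus primed, in ms.
    Positive values indicate facilitation; negative values, inhibition."""
    mean = lambda rts: sum(rts) / len(rts)
    return mean(control_rts_ms) - mean(primed_rts_ms)
```

On this convention, OKTOBER preceded by appropriately stressed okTO- would yield a positive effect, and OKTOBER preceded by OCto- a negative one.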


2020
Vol 7 (2)
p. 45
Author(s):
Mary Susan Anyiendah
Paul A. Odundo
Agnes Kibuyi

Word recognition is one of the comprehension processing skills encapsulated in the interactive approach to instruction. Word recognition skills enable readers to understand the meaning of comprehension passages by decoding the sounds of new words. Learners in Vihiga County perform more poorly in English language examinations than their peers in neighbouring counties, and the performance is weaker in the comprehension than in the grammar sections of the English paper. Despite this, there is a paucity of empirical information about the nexus between activation of word recognition skills and learners' achievement in reading comprehension in the county. This study applied the Solomon Four-Group Design to source data from 279 primary school learners and 8 teachers in 2017. Multiple linear regression was used to generate two models, one for the experimental group (Model 1) and one for the control group (Model 2). Key results show that the influence of word recognition skills on learners' achievement in reading comprehension was statistically significant in both groups. However, the effect was stronger in the experimental group than in the control group, suggesting that training the teachers of the experimental group enabled learners in that group to perform better than their colleagues in the control group. Thus, activation of learners' word recognition skills is likely to improve achievement in reading comprehension.
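The two-model analysis can be illustrated by fitting ordinary least squares separately to each group. The sketch below uses simple (one-predictor) regression in place of the study's multiple regression, and the scores and variable names are invented for illustration, not taken from the study's data:

```python
def fit_ols(x, y):
    """Fit y = b0 + b1*x by ordinary least squares; return (b0, b1)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx                 # slope
    return my - b1 * mx, b1        # intercept, slope

# Hypothetical word-recognition scores (x) vs. comprehension scores (y),
# one model per group, mirroring the study's Model 1 / Model 2 setup.
experimental = ([1, 2, 3, 4], [5, 8, 11, 14])
control = ([1, 2, 3, 4], [4, 5, 6, 7])
model_1 = fit_ols(*experimental)   # steeper slope in the experimental group
model_2 = fit_ols(*control)
```

A stronger effect in the experimental group corresponds to a larger slope in Model 1 than in Model 2 for the word-recognition predictor.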


Author(s):
Krista Byers-Heinlein
Amel Jardak
Eva Fourakis
Casey Lew-Williams

Abstract: Language mixing is common in bilingual children's learning environments. Here, we investigated effects of language mixing on children's learning of new words. We tested two groups of 3-year-old bilinguals: French–English (Experiment 1) and Spanish–English (Experiment 2). Children were taught two novel words, one in single-language sentences (“Look! Do you see the dog on the teelo?”) and one in mixed-language sentences with a mid-sentence language switch (“Look! Do you see the chien/perro on the walem?”). During the learning phase, children correctly identified novel targets when hearing both single-language and mixed-language sentences. However, at test, French–English bilinguals did not successfully recognize the word encountered in mixed-language sentences. Spanish–English bilinguals failed to recognize either word, which underscores the importance of examining multiple bilingual populations. This research suggests that language mixing may sometimes hinder children's encoding of novel words that occur downstream, but leaves open several possible underlying mechanisms.

