Perceptual flexibility: Maintenance or recovery of the ability to discriminate non-native speech sounds.

1984 ◽  
Vol 38 (4) ◽  
pp. 579-590 ◽  
Author(s):  
Richard C. Tees ◽  
Janet F. Werker

2016 ◽  
Vol 139 (4) ◽  
pp. 2162-2162
Author(s):  
Pamela Fuhrmeister ◽  
F. Sayako Earle ◽  
Jay Rueckl ◽  
Emily Myers

2020 ◽  
Vol 147 (3) ◽  
pp. EL289-EL294
Author(s):  
Pamela Fuhrmeister ◽  
Garrett Smith ◽  
Emily B. Myers

2020 ◽  
Author(s):  
Christopher Martin Mikkelsen Cox ◽  
Tamar Keren-Portnoy ◽  
Andreas Roepstorff ◽  
Riccardo Fusaroli

This paper investigates the extent to which infants can integrate synchronous speech information across different modalities. A meta-analysis of 24 studies reporting 92 separate effect size measures suggests that infants possess a robust ability to perceive audio-visual congruence for speech sounds. Applying a hierarchical Bayesian robust regression model to the data indicates a moderate effect size in a positive direction (0.35, CI [0.21, 0.50]). Moderator analyses suggest that infants’ audio-visual matching ability for speech sounds emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
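The pooled estimate reported above comes from a hierarchical Bayesian robust regression. As a rough illustration of how a pooled effect size and interval are derived from per-study effects, here is a minimal sketch of a classical DerSimonian-Laird random-effects meta-analysis — a simpler frequentist stand-in for the authors' Bayesian model; the effect sizes and variances in the usage example are invented, not the study's data:

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooled effect size.

    effects   -- per-study standardized effect sizes (e.g. Cohen's d)
    variances -- per-study sampling variances of those effects
    Returns (pooled_effect, lower_95, upper_95).
    """
    k = len(effects)
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (e - fe) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Invented example: four studies with moderate positive effects
pooled, low, high = random_effects_meta(
    [0.4, 0.2, 0.5, 0.3], [0.02, 0.03, 0.05, 0.04]
)
```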


2020 ◽  
Vol 1 (3) ◽  
pp. 339-364
Author(s):  
David I. Saltzman ◽  
Emily B. Myers

The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of debate for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the “dorsal stream”) to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and on acoustically matched inarticulable nonspeech analogues. After participants reached comparable levels of proficiency with the two sets of stimuli, activation was measured with fMRI as they passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left- and right-hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding articulable (speech) sounds than inarticulable (sine-wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation for novel sound learning.
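Decoding category membership from activation patterns is typically done with a cross-validated classifier over voxel patterns. As a toy illustration of that general technique — not the study's actual pipeline, classifier, or data — here is a leave-one-out nearest-centroid decoder over simulated two-voxel patterns:

```python
def nearest_centroid_decode(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid rule.

    patterns -- list of voxel-pattern vectors (lists of floats)
    labels   -- category label for each pattern
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = 0
    for i, (test, truth) in enumerate(zip(patterns, labels)):
        # Centroid of each category, excluding the held-out pattern
        centroids = {}
        for lab in set(labels):
            rows = [p for j, (p, l) in enumerate(zip(patterns, labels))
                    if l == lab and j != i]
            centroids[lab] = [sum(col) / len(rows) for col in zip(*rows)]
        guess = min(centroids, key=lambda lab: sq_dist(test, centroids[lab]))
        correct += guess == truth
    return correct / len(patterns)

# Simulated, well-separated patterns for two sound categories
patterns = [[1, 0], [0.9, 0.1], [1.1, -0.1], [0, 1], [0.1, 0.9], [-0.1, 1.1]]
labels = ["a", "a", "a", "b", "b", "b"]
```

Above-chance accuracy on held-out patterns is what licenses the claim that a region "decodes" the contrast.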


2008 ◽  
Vol 79 ◽  
pp. 21-29
Author(s):  
Desiree Capel ◽  
Elise de Bree ◽  
Annemarie Kerkhoff ◽  
Frank Wijnen

Phonemes are perceived categorically, and for adult listeners this perception is language-specific. Infants are initially "universal" listeners, capable of discriminating both native and non-native speech contrasts; this ability disappears in the first year of life. Maye et al. (2002, Cognition) propose that statistical learning is responsible for this shift to language-specific perception. They were the first to show that 6- and 8-month-old infants use the statistical distribution of phonetic variation in learning to discriminate speech sounds. A replication of this experiment tested 10- to 11-month-old Dutch infants, who were exposed to either a bimodal or a unimodal frequency distribution over an 8-step speech sound continuum based on the Hindi voiced and voiceless retroflex plosives (/da/ and /ta/). The results show that only infants in the bimodal condition could discriminate the contrast, representing the speech sounds as two categories rather than one.
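The bimodal-versus-unimodal familiarization design can be sketched as follows. The per-step token counts here are illustrative assumptions, not the counts used in the study; what matters is that the bimodal condition peaks twice along the 8-step continuum while the unimodal condition peaks once, with total exposure matched across conditions:

```python
# Hypothetical token counts over an 8-step /da/-/ta/ continuum
BIMODAL = [2, 4, 8, 4, 4, 8, 4, 2]   # peaks near steps 3 and 6
UNIMODAL = [2, 4, 4, 8, 8, 4, 4, 2]  # single peak at the middle

def sample_familiarization(counts, steps=8):
    """Expand per-step token counts into a flat familiarization list."""
    return [step for step in range(1, steps + 1)
            for _ in range(counts[step - 1])]
```

Matching the total token counts ensures that any discrimination difference between conditions reflects the shape of the distribution rather than the amount of exposure.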


2013 ◽  
Vol 29 (4) ◽  
pp. 391-411 ◽  
Author(s):  
Kristen Swan ◽  
Emily Myers

Adults tend to perceive speech sounds from their native language as members of distinct and stable categories; however, they fail to perceive differences between many non-native speech sounds without a great deal of training. The present study investigates the effects of categorization training on adults’ ability to discriminate non-native phonetic contrasts. It was hypothesized that only individuals who successfully learned the appropriate categories would show selective improvements in discriminating between-category contrasts. Participants were trained to categorize progressively narrower phonetic contrasts across one of two non-native boundaries, with discrimination pre- and post-tests completed to measure the effects of training on participants’ perceptual sensitivity. Results suggest that changes in adults’ ability to discriminate a non-native contrast depend on their successful learning of the relevant category structure. Furthermore, post-training identification functions show that changes in perceptual categories correspond specifically to the relative placement of the category boundary. Taken together, these results indicate that learning to assign category labels to a non-native speech continuum is sufficient to induce discontinuous perception of between- versus within-category contrasts.


2016 ◽  
Vol 66 (S2) ◽  
pp. 155-186 ◽  
Author(s):  
Natalia Kartushina ◽  
Ulrich H. Frauenfelder ◽  
Narly Golestani

2011 ◽  
Vol 23 (12) ◽  
pp. 4038-4047 ◽  
Author(s):  
Zarinah K. Agnew ◽  
Carolyn McGettigan ◽  
Sophie K. Scott

Several perspectives on speech perception posit a central role for the representation of articulations in speech comprehension, supported by evidence for premotor activation when participants listen to speech. However, no experiments have directly tested whether motor responses mirror the profile of selective auditory cortical responses to native speech sounds, or whether motor and auditory areas respond to sounds in different ways. We used fMRI to investigate cortical responses to speech and nonspeech mouth (ingressive click) sounds. Speech sounds activated bilateral superior temporal gyri more than other sounds, a profile not seen in motor and premotor cortices. These results suggest that there are qualitative differences in the ways that temporal and motor areas are activated by speech and click sounds: anterior temporal lobe areas are sensitive to acoustic or phonetic properties, whereas motor areas may respond in a more generalized way to acoustic stimuli.

