phoneme categorization
Recently Published Documents

TOTAL DOCUMENTS: 25 (FIVE YEARS: 2)
H-INDEX: 7 (FIVE YEARS: 0)

Author(s):  
Hadeer Derawi ◽  
Eva Reinisch ◽  
Yafit Gabay

Speech recognition is a complex human behavior in the course of which listeners must integrate the detailed phonetic information present in the acoustic signal with their general linguistic knowledge. It is commonly assumed that this process occurs effortlessly for most people, but it is still unclear whether this also holds true in the case of developmental dyslexia (DD), a condition characterized by perceptual deficits. In the present study, we used a dual-task setting to test the assumption that speech recognition is effortful for people with DD. In particular, we tested the Ganong effect (i.e., lexical bias on phoneme identification) while participants performed a secondary task of either low or high cognitive demand. We presumed that reduced efficiency in perceptual processing in DD would manifest in greater modulation of primary-task performance by cognitive load. Results revealed that this was indeed the case. We found a larger Ganong effect in the DD group under high than under low cognitive load, and this modulation was larger than it was for typically developed (TD) readers. Furthermore, phoneme categorization was less precise in the DD group than in the TD group. These findings suggest that individuals with DD show increased reliance on top-down lexically mediated perception processes, possibly as a compensatory mechanism for reduced efficiency in bottom-up use of acoustic cues. This indicates an imbalance between bottom-up and top-down processes in the speech recognition of individuals with DD.
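The Ganong effect described above is commonly quantified as a shift of the category boundary on an acoustic continuum toward the lexically valid word. The following toy sketch (not the authors' analysis; the boundary, slope, and shift values are illustrative assumptions) models phoneme labeling as a logistic psychometric function and shows how a larger lexical shift, here standing in for the high-cognitive-load condition, pushes responses on an ambiguous token further toward the word-consistent category:

```python
import math

def p_g_response(vot_step, boundary=3.5, slope=1.5, lexical_shift=0.0):
    """Probability of a /g/ response along a VOT continuum (logistic).

    lexical_shift moves the category boundary toward the real word
    (e.g., toward 'gift' in a gift-*kift context); a larger shift
    means a larger lexical bias on phoneme identification.
    All parameter values here are illustrative, not fitted data.
    """
    return 1.0 / (1.0 + math.exp(slope * (vot_step - (boundary + lexical_shift))))

# Toy stand-ins for the two load conditions: under high cognitive load,
# a greater reliance on lexical knowledge is modeled as a larger shift.
low_load_shift, high_load_shift = 0.5, 1.5
ambiguous_step = 4  # an acoustically ambiguous continuum step

print(p_g_response(ambiguous_step, lexical_shift=low_load_shift))
print(p_g_response(ambiguous_step, lexical_shift=high_load_shift))
```

On the ambiguous step, the larger shift yields a higher word-consistent response probability, which is the signature the dual-task comparison looks for.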


2021 ◽  
pp. 095679762096878
Author(s):  
Spencer Caplan ◽  
Alon Hafri ◽  
John C. Trueswell

What happens to an acoustic signal after it enters the mind of a listener? Previous work has demonstrated that listeners maintain intermediate representations over time. However, the internal structure of such representations—be they the acoustic-phonetic signal or more general information about the probability of possible categories—remains underspecified. We present two experiments using a novel speaker-adaptation paradigm aimed at uncovering the format of speech representations. We exposed adult listeners (N = 297) to a speaker whose utterances contained acoustically ambiguous information concerning phones (and thus words), and we manipulated the temporal availability of disambiguating cues via visually presented text (presented before or after each utterance). Results from a traditional phoneme-categorization task showed that listeners adapted to a modified acoustic distribution when disambiguating text was provided before but not after the audio. These results support the position that speech representations consist of activation over categories and are inconsistent with direct maintenance of the acoustic-phonetic signal.
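The before/after asymmetry reported above can be sketched as a simple Bayesian blend (a minimal illustration of the category-activation account, not the authors' model; the categories and probability values are assumptions): when disambiguating text precedes the audio, lexical context can set the prior while the acoustic evidence is still available, so the category posterior shifts; once only a category-level decision survives, later text has no acoustic detail left to re-weight.

```python
def categorize(acoustic_likelihoods, prior):
    """Blend acoustic evidence with category priors (Bayes' rule);
    returns a normalized posterior over phoneme categories."""
    post = {c: prior[c] * acoustic_likelihoods[c] for c in prior}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# An acoustically ambiguous token between /s/ and /sh/ (toy numbers).
ambiguous = {"s": 0.5, "sh": 0.5}

# Text BEFORE audio: context biases the prior while the acoustic
# evidence is still in play, so the posterior shifts toward /s/.
before = categorize(ambiguous, prior={"s": 0.8, "sh": 0.2})

# Text AFTER audio: by then only the category posterior remains;
# with no contextual prior applied at encoding, it stays at chance.
after = categorize(ambiguous, prior={"s": 0.5, "sh": 0.5})

print(before["s"], after["s"])
```

Under this sketch, adaptation (a shifted category mapping) is only possible in the text-before condition, mirroring the experimental result.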


2020 ◽  
Author(s):  
Spencer Caplan ◽  
Alon Hafri ◽  
John Trueswell

(Accepted and in press at Psychological Science) What happens to the acoustic signal after it enters the mind of a listener? Previous work demonstrates that listeners maintain intermediate representations over time. However, the internal structure of such representations—be they the acoustic-phonetic signal or more general information about the probability of possible categories—remains underspecified. We present two experiments using a novel speaker adaptation paradigm aimed at uncovering the format of speech representations. We exposed adult listeners (N = 297) to a speaker whose utterances contained acoustically ambiguous information concerning phones/words and manipulated the temporal availability of disambiguating cues via visually presented text (i.e., presentation before or after each utterance). Results from a traditional phoneme categorization task showed that listeners adapt to a modified acoustic distribution when disambiguating text is provided before the audio, but not after. Results support the position that speech representations consist of activation over categories and are inconsistent with direct maintenance of the acoustic-phonetic signal.


2020 ◽  
Vol 148 (4) ◽  
pp. 2209-2222
Author(s):  
Gabrielle E. O'Brien ◽  
Liesbeth Gijbels ◽  
Jason D. Yeatman


2019 ◽  
Vol 81 (6) ◽  
pp. 2037-2052 ◽  
Author(s):  
Christian E. Stilp ◽  
Ashley A. Assgari

2019 ◽  
Vol 145 (3) ◽  
pp. 1913-1913
Author(s):  
Andrew Lamont ◽  
Rong Yin ◽  
Aneesh Naik ◽  
John Kingston

2018 ◽  
Vol 8 (1) ◽  
Author(s):  
Gabrielle E. O’Brien ◽  
Daniel R. McCloy ◽  
Emily C. Kubota ◽  
Jason D. Yeatman

2018 ◽  
Author(s):  
Gabrielle O'Brien ◽  
Daniel McCloy ◽  
Jason D. Yeatman

It is established that individuals with dyslexia are less consistent at auditory phoneme categorization than typical readers. One hypothesis attributes differences in phoneme labeling to differences in auditory cue integration over time, suggesting that dyslexics' performance would improve with longer exposure to informative phonetic cues. Here, the relationship between phoneme labeling and reading ability was investigated while manipulating the duration of steady-state auditory information available in a consonant-vowel syllable. Dyslexics obtained no more benefit from longer cues than did typical readers, suggesting that poor task performance is not explained by deficits in temporal integration or temporal sampling.


2018 ◽  
Author(s):  
Colin Noe ◽  
Simon Fischer-Baum

Contextual information influences how we perceive speech, but it remains unclear at which level of processing contextual information merges with acoustic information. Theories differ on whether early, sublexical speech processing levels are strictly feed-forward or are influenced by semantic and lexical context. Studies using behavioral responses have shown that contextual factors influence sublexical judgments but are unable to pinpoint whether context biases responses by modulating sublexical processing or later response selection stages. In the current study, we investigate the time-course of context effects by recording electroencephalography as an online measure of sublexical speech processing while subjects engage in a lexically biasing phoneme categorization task. We find that lexical context modulates the amplitude of the N100, an ERP component linked with sublexical processes in speech perception. These results support interactive accounts of speech perception over accounts in which early speech perception processes are driven only by bottom-up information.

