Steady-state suppression in reverberation: a comparison of native and nonnative speech perception

Author(s): Nao Hodoshima, Dawn Behne, Takayuki Arai
1992 · Vol 35 (1) · pp. 192-200
Author(s): Michele L. Steffens, Rebecca E. Eilers, Karen Gross-Glenn, Bonnie Jallad

Speech perception was investigated in a carefully selected group of adult subjects with familial dyslexia. Perception of three synthetic speech continua was studied: /a/-//, in which steady-state spectral cues distinguished the vowel stimuli; /ba/-/da/, in which rapidly changing spectral cues were varied; and /sta/-/sa/, in which a temporal cue, silence duration, was systematically varied. These three continua, which differed with respect to the nature of the acoustic cues discriminating between pairs, were used to assess subjects’ abilities to use steady-state, dynamic, and temporal cues. Dyslexic and normal readers participated in one identification and two discrimination tasks for each continuum. Results suggest that dyslexic readers required a longer silence duration than normal readers to shift their perception from /sa/ to /sta/. In addition, although the dyslexic subjects were able to label and discriminate the synthetic speech continua, they did not necessarily use the acoustic cues in the same manner as normal readers, and their overall performance was generally less accurate.
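The reported boundary shift on the /sa/-/sta/ continuum can be illustrated by finding the 50% crossover point of an identification function. A minimal sketch; the silence durations and response proportions below are invented for illustration, not taken from the study:

```python
# Sketch: estimate the /sa/-/sta/ category boundary (the silence duration
# at which /sta/ responses cross 50%) by linear interpolation between
# adjacent points of an identification function. All data are hypothetical.

def boundary_50(silence_ms, p_sta):
    """Return the silence duration at which /sta/ responses cross 50%."""
    points = list(zip(silence_ms, p_sta))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 < 0.5 <= y1:
            # linear interpolation between the two flanking points
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("no 50% crossing found")

silence_ms = [0, 20, 40, 60, 80, 100]
normal   = [0.05, 0.15, 0.55, 0.85, 0.95, 1.00]  # hypothetical normal readers
dyslexic = [0.02, 0.08, 0.30, 0.60, 0.90, 0.98]  # hypothetical dyslexic readers

print(boundary_50(silence_ms, normal))    # boundary for normal readers
print(boundary_50(silence_ms, dyslexic))  # longer boundary for dyslexic readers
```

With these hypothetical data the dyslexic function crosses 50% at a longer silence duration, which is the pattern the abstract describes.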


1980 · Vol 23 (2) · pp. 419-428
Author(s): Rebecca E. Eilers, D. Kimbrough Oller

The discrimination of minimally paired speech sounds by seven retarded children with a mean age of 3 years, 2 months and a mean IQ of 38.4 was compared with the discrimination performance of eight normally developing 7-month-old infants. Children and infants were tested using the Visually Reinforced Infant Speech Discrimination (VRISD) paradigm, in which they were taught to respond with a head turn to a change in a repeating background auditory stimulus. Responses were reinforced by activation of an animated toy. All children proved to be conditionable, and both groups evidenced discrimination of the speech contrasts tested. The data suggest that the retarded children have more difficulty processing a contrast cued by rapid spectral changes (often associated with consonant discrimination) than they do a contrast cued by steady-state spectral information (often associated with the perception of slowly articulated vowels). The normally developing infants did not find rapid spectral cues more difficult than steady-state cues. These results parallel those of Tallal (1976), who found that dynamic cues were specifically difficult for dysphasic children (with normal nonverbal intelligence), but not for linguistically normal elementary school children.


2014 · Vol 2014 · pp. 1-8
Author(s): Venugopal Manju, Kizhakke Kodiyath Gopika, Pitchai Muthu Arivudai Nambi

Amplitude modulations in speech convey important acoustic information for speech perception. The auditory steady-state response (ASSR) is thought to be a physiological correlate of amplitude modulation perception. Limited research has explored the association between ASSR and modulation detection ability, as well as speech perception. The correlation of modulation detection thresholds (MDTs) and speech perception in noise with ASSR was investigated in two experiments. Thirty normal-hearing individuals and 11 normal-hearing individuals in the age range of 18–24 years participated in Experiments 1 and 2, respectively. In the first experiment, MDTs were measured using ASSR and a behavioral method at 60 Hz, 80 Hz, and 120 Hz modulation frequencies. The ASSR threshold was obtained by estimating the minimum modulation depth required to elicit an ASSR (ASSR-MDT). There was a positive correlation between behavioral MDT and ASSR-MDT at all modulation frequencies. In the second experiment, ASSRs for amplitude modulation (AM) sweeps in four frequency ranges (30–40 Hz, 40–50 Hz, 50–60 Hz, and 60–70 Hz) were recorded. The speech recognition threshold in noise (SRTn) was estimated using a staircase procedure. There was a positive correlation between the amplitude of the ASSR for the AM sweep in the 30–40 Hz range and SRTn. The results of the current study suggest that ASSR provides substantial information about temporal modulation perception and speech perception.
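The staircase estimation of SRTn mentioned above can be sketched as a standard 1-up/2-down adaptive track, which converges on roughly the 70.7% correct point (Levitt's rule). This is a generic sketch with a simulated listener, not the authors' exact procedure or parameters:

```python
# Sketch: 1-up/2-down adaptive staircase for a speech-in-noise threshold.
# `respond(snr)` returns True when the (simulated) listener is correct.
# Step size, start level, and stopping rule are illustrative choices.

def staircase_srtn(respond, start_snr=10.0, step=2.0, n_reversals=8):
    """Track SNR adaptively; return the mean of the last six reversal points."""
    snr, correct_run, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(snr):
            correct_run += 1
            if correct_run == 2:          # two correct in a row -> make it harder
                correct_run = 0
                if direction == +1:       # direction changed: record a reversal
                    reversals.append(snr)
                direction = -1
                snr -= step
        else:                             # one error -> make it easier
            correct_run = 0
            if direction == -1:           # direction changed: record a reversal
                reversals.append(snr)
            direction = +1
            snr += step
    last = reversals[-6:]
    return sum(last) / len(last)

# Hypothetical deterministic listener: correct at or above 0 dB SNR.
print(staircase_srtn(lambda snr: snr >= 0.0))
```

With this idealized listener the track oscillates around 0 dB and the reversal average lands just below it; real listeners respond probabilistically, so more reversals are typically averaged.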


1974 · Vol 17 (2) · pp. 203-222
Author(s): R. D. Kent

Auditory-motor formant tracking, or the vocal reproduction of formant patterns, is one aspect of speech imitation skill. The study reported here assessed the ability of four adult speakers to imitate synthesized vocalic stimuli. These stimuli took the form of two steady-state segments joined by a transitional segment. The first steady-state segment corresponded to one of eight American English vowels, and the second, to one of 14 vowels that were not expected to have a prominent phonemic identity in the language. Spectrographic analyses of the imitative responses allowed comparisons of the formant structure for the synthesized stimuli and the corresponding human reproductions. Analyses of the spectrograms revealed that the directions of movement for the first two formants were almost always reproduced accurately, but the extent of movement frequently was overshot. These responses were judged to be consistent with a contrast effect in speech perception, a phenomenon previously discovered in experiments on vowel identification. The variability of formant reproduction for a given vowel was predicted at least roughly by the ambiguity of the stimulus in a preliminary identification experiment. These results suggest that the responses in an imitation task are intermediate in dimensionality to the responses in discrimination and identification tasks.
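The direction/extent comparison described above can be made concrete with a small numeric sketch. The formant values below are invented for illustration and are not measurements from the study:

```python
# Sketch: compare the direction and extent of a formant transition between
# a synthesized stimulus and an imitative response. A ratio > 1 corresponds
# to the "overshoot" of movement extent reported in the abstract.

def movement(stim_start_hz, stim_end_hz, resp_start_hz, resp_end_hz):
    """Return (same_direction, extent_ratio) for one formant transition."""
    stim_delta = stim_end_hz - stim_start_hz
    resp_delta = resp_end_hz - resp_start_hz
    same_direction = (stim_delta >= 0) == (resp_delta >= 0)
    extent_ratio = resp_delta / stim_delta if stim_delta else float("nan")
    return same_direction, extent_ratio

# Hypothetical F2 values (Hz): the response moves in the same direction
# as the stimulus but overshoots its extent.
same_dir, ratio = movement(1200, 1600, 1180, 1750)
print(same_dir, round(ratio, 2))
```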


2019 · Vol 40 (2) · pp. 585-611
Author(s): Alexander J. Kilpatrick, Rikke L. Bundgaard-Nielsen, Brett J. Baker

Most current models of nonnative speech perception (e.g., extended perceptual assimilation model, PAM-L2, Best & Tyler, 2007; speech learning model, Flege, 1995; native language magnet model, Kuhl, 1993) base their predictions on the native/nonnative status of individual phonetic/phonological segments. This paper demonstrates that the phonotactic properties of Japanese influence the perception of natively contrasting consonants and suggests that phonotactic influence must be formally incorporated in these models. We first propose that by extending the perceptual categories outlined in PAM-L2 to incorporate sequences of sounds, we can account for the effects of differences in native and nonnative phonotactics on nonnative and cross-language segmental perception. In addition, we test predictions based on such an extension in two perceptual experiments. In Experiment 1, Japanese listeners categorized and rated vowel–consonant–vowel strings in combinations that either obeyed or violated Japanese phonotactics. The participants categorized phonotactically illegal strings to the perceptually nearest (legal) categories. In Experiment 2, participants discriminated the same strings in AXB discrimination tests. Our results show that Japanese listeners are more accurate and have faster response times when discriminating between legal strings than between legal and illegal strings. These findings expose serious shortcomings in currently accepted nonnative perception models, which offer no framework for the influence of native language phonotactics.
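The AXB accuracy comparison reported above can be sketched as a simple per-condition scoring routine. The trial records and condition labels below are hypothetical, for illustration only:

```python
# Sketch: score AXB discrimination trials and compare accuracy across
# stimulus conditions (e.g., legal-legal vs. legal-illegal pairings).
# Each trial is (condition, chosen_response, correct_response).

def axb_accuracy(trials):
    """Return a dict mapping each condition to its proportion correct."""
    totals, hits = {}, {}
    for cond, chosen, correct in trials:
        totals[cond] = totals.get(cond, 0) + 1
        hits[cond] = hits.get(cond, 0) + (chosen == correct)
    return {cond: hits[cond] / totals[cond] for cond in totals}

# Hypothetical data: discrimination is better within legal strings.
trials = [
    ("legal-legal",   "A", "A"), ("legal-legal",   "B", "B"),
    ("legal-legal",   "A", "A"), ("legal-legal",   "B", "A"),
    ("legal-illegal", "A", "B"), ("legal-illegal", "B", "B"),
    ("legal-illegal", "A", "A"), ("legal-illegal", "B", "A"),
]
acc = axb_accuracy(trials)
print(acc)  # {'legal-legal': 0.75, 'legal-illegal': 0.5}
```

Response-time comparisons would be scored analogously, averaging latencies per condition instead of counting correct responses.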


1967 · Vol 19 (1) · pp. 59-63
Author(s): Donald Shankweiler, Michael Studdert-Kennedy

The results of earlier studies by several authors suggest that speech and nonspeech auditory patterns are processed primarily in different places in the brain and perhaps by different modes. The question arises in studies of speech perception whether all phonetic elements or all features of phonetic elements are processed in the same way. The technique of dichotic presentation was used to examine this question. The present study compared identifications of dichotically presented pairs of synthetic CV syllables and pairs of steady-state vowels. The results show a significant right-ear advantage for CV syllables but not for steady-state vowels. Evidence for analysis by feature in the perception of consonants is discussed.
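Ear advantages in dichotic studies are often quantified with a laterality index such as (R − L)/(R + L) × 100. This is one common convention, not necessarily the measure used by the authors; the counts below are hypothetical:

```python
# Sketch: a common laterality index for dichotic listening scores.
# Positive values indicate a right-ear advantage (REA).

def ear_advantage(right_correct, left_correct):
    """Return 100 * (R - L) / (R + L); positive means right-ear advantage."""
    return 100.0 * (right_correct - left_correct) / (right_correct + left_correct)

# Hypothetical counts: CV syllables show an REA, steady-state vowels do not.
print(ear_advantage(70, 50))  # CV syllables -> positive index
print(ear_advantage(60, 60))  # steady-state vowels -> zero
```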


Author(s): R. C. Moretz, G. G. Hausner, D. F. Parsons

Use of the electron microscope to examine wet objects is possible due to the small mass thickness of the equilibrium pressure of water vapor at room temperature. Previous attempts to examine hydrated biological objects and water itself used a chamber consisting of two small apertures sealed by two thin films. Extensive work in our laboratory showed that such films have an 80% failure rate when wet. Using the principle of differential pumping of the microscope column, we can use open apertures in place of thin film windows. Fig. 1 shows the modified Siemens 1A specimen chamber with the connections to the water supply and the auxiliary pumping station. A mechanical pump is connected to the vapor supply via a 100 μm aperture to maintain steady-state conditions.


2021
Author(s): Wu Lan, Yuan Peng Du, Songlan Sun, Jean Behaghel de Bueren, Florent Héroguel, ...

We performed a steady-state, high-yield depolymerization of soluble acetal-stabilized lignin in flow, which offered a window into the challenges and opportunities that will be faced when continuously processing this feedstock.

