Identification of temporal envelope cues in Chinese tone recognition

2000 ◽  
Vol 5 (1) ◽  
pp. 45-57 ◽  
Author(s):  
Qian-Jie Fu ◽  
Fan-Gang Zeng
2021 ◽  
Vol 15 ◽  
Author(s):  
Zhong Zheng ◽  
Keyi Li ◽  
Gang Feng ◽  
Yang Guo ◽  
Yinan Li ◽  
...  

Objectives: Mandarin-speaking users of cochlear implants (CIs) perform more poorly than their English-speaking counterparts, possibly because present CI speech-coding schemes are largely based on English. This study aimed to evaluate the relative contributions of temporal envelope (E) cues to Mandarin phoneme (vowel and consonant) and lexical tone recognition, to inform speech-coding schemes specific to Mandarin.

Design: Eleven normal-hearing subjects were tested with acoustic temporal E cues extracted from 30 contiguous frequency bands between 80 and 7,562 Hz using the Hilbert transform and grouped into five frequency regions. Percent-correct recognition scores were obtained with acoustic E cues presented in three, four, and five frequency regions, and the relative weight of each region was calculated using the least-squares approach.

Results: For stimuli with three, four, and five frequency regions, percent-correct scores were 50.43–84.82%, 76.27–95.24%, and 96.58% for vowel recognition; 35.49–63.77%, 67.75–78.87%, and 87.87% for consonant recognition; and 60.80–97.15%, 73.16–96.87%, and 96.73% for lexical tone recognition. Across frequency regions 1 to 5, the mean weights were 0.17, 0.31, 0.22, 0.18, and 0.12 for vowel recognition; 0.10, 0.16, 0.18, 0.23, and 0.33 for consonant recognition; and 0.38, 0.18, 0.14, 0.16, and 0.14 for lexical tone recognition.

Conclusion: Region 2 (502–1,022 Hz), which contains first-formant (F1) information, contributed most to vowel recognition; Region 5 (3,856–7,562 Hz) contributed most to consonant recognition; and Region 1 (80–502 Hz), which contains fundamental-frequency (F0) information, contributed most to lexical tone recognition.
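The envelope-extraction step described in the Design section (band-pass filtering followed by the Hilbert transform) can be sketched as below. This is a minimal illustration, not the study's implementation: the sampling rate, filter orders, and the 50 Hz envelope-smoothing cutoff are assumed values, and only a single band (Region 2, 502–1,022 Hz) is shown rather than the full 30-band analysis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000  # assumed sampling rate (Hz); the study's bands span 80-7,562 Hz

def band_envelope(signal, low_hz, high_hz, fs, env_cutoff=50.0):
    """Band-pass the signal, then take the Hilbert-transform magnitude
    as the temporal envelope, smoothed by a low-pass filter.
    Filter orders and the 50 Hz cutoff are illustrative assumptions."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, signal)
    env = np.abs(hilbert(band))  # analytic-signal magnitude = temporal envelope
    sos_lp = butter(4, env_cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos_lp, env)

# Demo: a 1 kHz carrier amplitude-modulated at 10 Hz; the recovered
# envelope should track the 10 Hz modulator.
t = np.arange(0, 0.5, 1 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)
am = 0.5 * (1 + np.sin(2 * np.pi * 10 * t))
env = band_envelope(am * carrier, 502, 1022, fs)  # Region 2 from the abstract
```

In a full reconstruction of the paradigm, each band's envelope would then modulate a band-limited noise or tone carrier before the bands are summed and presented to listeners.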


2009 ◽  
Vol 10 (sup1) ◽  
pp. 148-158
Author(s):  
Kevin CP Yuen ◽  
Michael CF Tong ◽  
Van Charles A Hasselt ◽  
Meng Yuan ◽  
Tan Lee ◽  
...  

2012 ◽  
Vol 43 (01) ◽  
Author(s):  
L Timm ◽  
D Agrawal ◽  
M Wittfoth ◽  
R Dengler

2002 ◽  
Vol 87 (4) ◽  
pp. 1723-1737 ◽  
Author(s):  
Srikantan S. Nagarajan ◽  
Steven W. Cheung ◽  
Purvis Bedenbaugh ◽  
Ralph E. Beitel ◽  
Christoph E. Schreiner ◽  
...  

Cortical sensitivity in representations of behaviorally relevant complex input signals was examined in recordings from primary auditory cortical (AI) neurons in adult, barbiturate-anesthetized common marmoset monkeys ( Callithrix jacchus). We studied the robustness of distributed responses to natural and degraded forms of twitter calls, social contact vocalizations comprising several quasi-periodic phrases of frequency and amplitude modulation (AM). We recorded neuronal responses to a monkey's own twitter call (MOC), degraded forms of its twitter call, and sinusoidal amplitude-modulated (SAM) tones with modulation rates similar to those of twitter calls. In spectral envelope degradation, calls with narrowband channels of varying bandwidths had the same temporal envelope as a natural call, but the carrier phase was randomized within each narrowband channel. In temporal envelope degradation, the temporal envelope within narrowband channels was filtered while the carrier frequencies and phases remained unchanged. In a third form of degradation, noise was added to the natural calls. Spatiotemporal discharge patterns in AI, both within and across frequency bands, encoded spectrotemporal acoustic features in the call, although the encoded response is an abstract version of the call. The average temporal response pattern in AI, however, was significantly correlated with the average temporal envelope for each phrase of a call. Response entrainment to MOC was significantly correlated with entrainment to SAM stimuli at comparable modulation frequencies. Sensitivity of the response patterns to MOC was substantially greater for temporal envelope than for spectral envelope degradations. The distributed responses in AI were robust to additive continuous noise at signal-to-noise ratios ≥10 dB. Neurophysiological data reflecting response sensitivity in AI to these forms of degradation closely parallel human psychophysical results on the intelligibility of degraded speech in quiet and noisy conditions.
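The SAM tones used as comparison stimuli are simple to construct: a sinusoidal carrier whose amplitude is modulated by a slower sinusoid. The sketch below is illustrative only; the carrier frequency, modulation rate, depth, duration, and sampling rate are assumed values, not parameters reported in the study.

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs):
    """Sinusoidal amplitude-modulated (SAM) tone: carrier at fc (Hz)
    modulated at rate fm (Hz) with the given modulation depth (0-1)."""
    t = np.arange(int(dur * fs)) / fs
    modulator = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return modulator * np.sin(2 * np.pi * fc * t)

# Assumed example parameters: 4 kHz carrier, 8 Hz modulation
# (twitter-call phrases repeat at rates on this order), full depth.
tone = sam_tone(fc=4000, fm=8, depth=1.0, dur=0.25, fs=16000)
```

Varying `fm` across stimuli lets one probe response entrainment at modulation frequencies comparable to the phrase rates of natural twitter calls, as done in the study.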

