Acoustic correlates of clear speech vowel intelligibility for elderly hearing‐impaired listeners.

2010 · Vol 127 (3) · pp. 1906-1906
Author(s): Sarah Hargus Ferguson

1986 · Vol 80 (S1) · pp. S78-S78
Author(s): R. M. Uchanski, L. D. Braida, N. I. Durlach, C. M. Reed

1986 · Vol 29 (4) · pp. 434-446
Author(s): M. A. Picheny, N. I. Durlach, L. D. Braida

The first paper of this series (Picheny, Durlach, & Braida, 1985) presented evidence that there are substantial intelligibility differences for hearing-impaired listeners between nonsense sentences spoken in a conversational manner and those spoken with the effort to produce clear speech. In this paper, we report the results of acoustic analyses performed on the conversational and clear speech. Among these results are the following. First, speaking rate decreases substantially in clear speech. This decrease is achieved both by inserting pauses between words and by lengthening the durations of individual speech sounds. Second, there are differences between the two speaking modes in the numbers and types of phonological phenomena observed. In conversational speech, vowels are modified or reduced, and word-final stop bursts are often not released. In clear speech, vowels are modified to a lesser extent, and stop bursts, as well as essentially all word-final consonants, are released. Third, the RMS intensities for obstruent sounds, particularly stop consonants, are greater in clear speech than in conversational speech. Finally, changes in the long-term spectrum are small. Thus, speaking clearly cannot be regarded as equivalent to the application of high-frequency emphasis.
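The RMS intensity comparison described above can be sketched in a few lines. This is a minimal illustration of the measurement, not the study's actual analysis pipeline; the helper name `rms_db` and the constant-amplitude toy segments are invented for the example.

```python
import numpy as np

def rms_db(segment):
    """Root-mean-square level of a waveform segment, in dB re full scale 1.0."""
    x = np.asarray(segment, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms)

# Hypothetical segments: a stop burst produced with greater intensity
# yields a higher RMS level than a quieter conversational one.
quiet = 0.01 * np.ones(100)
loud = 0.10 * np.ones(100)
print(round(rms_db(loud) - rms_db(quiet), 1))  # 20.0 (dB difference)
```

A tenfold amplitude difference corresponds to 20 dB, which is the kind of per-segment level contrast such an analysis would quantify.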


2013 · Vol 56 (5) · pp. 1429-1440
Author(s): Jennifer Lam, Kris Tjaden

Purpose: The authors investigated how clear speech instructions influence sentence intelligibility.
Method: Twelve speakers produced sentences in habitual, clear, hearing impaired, and overenunciate conditions. Stimuli were amplitude normalized and mixed with multitalker babble for orthographic transcription by 40 listeners. The main analysis investigated percentage-correct intelligibility scores as a function of the 4 conditions and speaker sex. Additional analyses included listener response variability, individual speaker trends, and an alternate intelligibility measure: proportion of content words correct.
Results: Relative to the habitual condition, the overenunciate condition was associated with the greatest intelligibility benefit, followed by the hearing impaired and clear conditions. Ten speakers followed this trend. The results indicated different patterns of clear speech benefit for male and female speakers. Greater listener variability was observed for speakers with inherently low habitual intelligibility than for speakers with inherently high habitual intelligibility. Stable proportions of content words were observed across conditions.
Conclusions: Clear speech instructions affected the magnitude of the intelligibility benefit. The instruction to overenunciate may be most effective in clear speech training programs. The findings may help explain the range of clear speech intelligibility benefit previously reported. The listener variability analyses suggest the importance of obtaining multiple listener judgments of intelligibility, especially for speakers with inherently low habitual intelligibility.
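The stimulus preparation in the Method (amplitude normalization, then mixing with multitalker babble) might be sketched as follows. The function name `mix_at_snr`, the unit-RMS normalization target, and the specific SNR value are assumptions for illustration; the abstract does not specify these details.

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Normalize speech and babble to equal RMS, then attenuate the babble
    so the speech-to-babble level difference equals snr_db, and sum them."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    speech = speech / rms(speech)          # amplitude normalization
    babble = babble / rms(babble)
    gain = 10.0 ** (-snr_db / 20.0)        # attenuation applied to the babble
    return speech + gain * babble

# Illustrative white-noise stand-ins for a sentence and a babble masker.
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
babble = rng.standard_normal(16000)
mixed = mix_at_snr(speech, babble, snr_db=5.0)
```

With both signals normalized to unit RMS, the babble gain alone sets the signal-to-noise ratio of the mixture presented to listeners.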


2011 · Vol 129 (4) · pp. 2528-2528
Author(s): Jean C. Krause, R. Ann Siapno, Ari B. Hansen

2016 · Vol 140 (4) · pp. 3441-3441
Author(s): Matthew J. Makashay, Nancy P. Solomon, Van Summers

1989 · Vol 32 (3) · pp. 600-603
Author(s): M. A. Picheny, N. I. Durlach, L. D. Braida

Previous studies (Picheny, Durlach, & Braida, 1985, 1986) have demonstrated that substantial intelligibility differences exist for hearing-impaired listeners between speech spoken clearly and speech spoken conversationally. This paper presents the results of a probe experiment intended to determine the contribution of speaking rate to the intelligibility differences. Clear sentences were processed to have the durational properties of conversational speech, and conversational sentences were processed to have the durational properties of clear speech. Intelligibility testing with hearing-impaired listeners revealed both sets of materials to be degraded after processing. However, the degradation could not be attributed to processing artifacts, because reprocessing the materials to restore their original durations produced intelligibility scores close to those observed for the unprocessed materials. We conclude that the simple processing used to alter the relative durations of the speech materials was not adequate to assess the contribution of speaking rate to the intelligibility differences; further studies are proposed to address this question.
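The core durational manipulation can be illustrated with the simplest possible approach, uniform time-scaling by resampling. This naive method also shifts pitch and formants, which is exactly why the study needed (and struggled with) processing that alters durations while preserving other acoustic properties; the helper `rescale_duration` and the scale factor are invented for the sketch.

```python
import numpy as np

def rescale_duration(signal, factor):
    """Uniformly stretch (factor > 1) or compress (factor < 1) a waveform by
    linear interpolation. Note: this also shifts pitch, so it is only a toy
    stand-in for duration-altering processing that preserves spectral detail."""
    n_out = int(round(len(signal) * factor))
    positions = np.linspace(0, len(signal) - 1, num=n_out)
    return np.interp(positions, np.arange(len(signal)), signal)

# A 0.5 s tone at 16 kHz, stretched toward clear-speech durations.
tone = np.sin(2 * np.pi * 440 * np.arange(8000) / 16000)
slowed = rescale_duration(tone, 1.5)
print(len(slowed))  # 12000 samples (0.75 s)
```

In practice, pitch-preserving time-scale modification (e.g., a phase vocoder) would be needed so that only the durational properties differ between the processed and unprocessed materials.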


2008 · Vol 2008 · pp. 1-7
Author(s): Hideyuki Sawada, Mitsuki Kitani, Yasumori Hayashi

A talking and singing robot that adaptively learns vocalization skills by means of an auditory feedback learning algorithm is being developed. The robot consists of motor-controlled vocal organs, such as vocal cords, a vocal tract, and a nasal cavity, which generate a natural voice imitating human vocalization. In this study, the robot is applied to a speech-articulation training system for hearing-impaired people, because the robot can reproduce their vocalizations and show them how those vocalizations should be modified to produce clear speech. The paper briefly introduces the mechanical construction of the robot and describes how it autonomously acquires vocalization skills through auditory feedback learning while listening to human speech. The training system is then described, together with an evaluation of the speech training by hearing-impaired people.
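The auditory feedback loop at the heart of such a system can be caricatured as iteratively adjusting motor parameters to shrink the mismatch between the heard production and a target. Everything here is invented for illustration: the robot's real vocal-tract mechanics and auditory analysis are far richer than this linear toy model.

```python
import numpy as np

def produce(params):
    """Toy stand-in for the vocal tract: maps motor parameters to 'formants'."""
    return 1000.0 * params  # hypothetical linear motor-to-acoustic mapping

def feedback_step(params, target, rate=0.1):
    """One auditory-feedback update: listen, compare with the target,
    and nudge the motor parameters to reduce the mismatch."""
    error = produce(params) - target
    return params - rate * error / 1000.0

target = np.array([700.0, 1200.0])   # target 'formant' values, illustrative
params = np.array([0.5, 0.5])        # initial motor configuration
for _ in range(100):
    params = feedback_step(params, target)
print(np.round(produce(params)))     # converges toward the target
```

Each iteration multiplies the remaining error by 0.9, so the produced values converge geometrically toward the target, which is the qualitative behavior an auditory feedback learner relies on.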

