Physiological Assessment of Speech and Voice Production of Adults With Hearing Loss

1994 ◽  
Vol 37 (3) ◽  
pp. 510-521 ◽  
Author(s):  
Maureen B. Higgins ◽  
Arlene E. Carney ◽  
Laura Schulte

The purpose of this investigation was to study the impact of hearing loss on phonatory, velopharyngeal, and articulatory functioning using a comprehensive physiological approach. Electroglottograph (EGG), nasal/oral air flow, and intraoral air pressure signals were recorded simultaneously from adults with impaired and normal hearing as they produced syllables and words of varying physiological difficulty. The individuals with moderate-to-profound hearing loss had good to excellent oral communication skills. Intraoral pressure, nasal air flow, durations of lip, velum, and vocal fold articulations, estimated subglottal pressure, mean phonatory air flow, fundamental frequency, and EGG abduction quotient were compared between the two subject groups. Data from the subjects with hearing loss also were compared across aided and unaided conditions to investigate the influence of auditory feedback on speech motor control. The speakers with hearing loss had significantly higher intraoral pressures, subglottal pressures, laryngeal resistances, and fundamental frequencies than those with normal hearing. There was notable between-subject variability. All of the individuals with profound hearing loss had at least one speech/voice physiology measure that fell outside of the normal range, and most of the subjects demonstrated unique clusters of abnormal behaviors. Abnormal behaviors were more evident in the phonatory than articulatory or velopharyngeal systems and were generally consistent with vocal fold hyperconstriction. There was evidence from individual data that vocal fold posturing influenced articulatory timing. The results did not support the idea that the speech production skills of adults with moderate-to-profound hearing loss who are good oral communicators deteriorate when there are increased motoric demands on the velopharyngeal and phonatory mechanism. 
Although no significant differences were found between the aided and unaided conditions, 7 of 10 subjects showed the same direction of change for subglottal pressure, intraoral pressure, nasal air flow, and the duration of lip and vocal fold articulations. We conclude that physiological assessments provide important information about the speech/voice production abilities of individuals with moderate-to-profound hearing loss and are a valuable addition to standard assessment batteries.
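The laryngeal resistance measure compared above is conventionally derived from two of the other reported quantities. As a hedged illustration (not the authors' code, and with invented example values), laryngeal airway resistance is estimated as subglottal pressure divided by mean phonatory airflow:

```python
# Hedged sketch: laryngeal airway resistance as the ratio of subglottal
# pressure to mean phonatory airflow. The example values are invented
# for illustration, not taken from the study.
def laryngeal_resistance(subglottal_pressure_cm_h2o: float,
                         mean_flow_l_per_s: float) -> float:
    """Resistance in cm H2O/(L/s)."""
    return subglottal_pressure_cm_h2o / mean_flow_l_per_s

# e.g. a speaker with Ps = 8 cm H2O and mean flow = 0.15 L/s:
print(round(laryngeal_resistance(8.0, 0.15), 1))  # → 53.3
```

Under this definition, the elevated subglottal pressures reported for the speakers with hearing loss translate directly into elevated resistances unless airflow rises proportionally.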

2021 ◽  
pp. 102986492110152
Author(s):  
Carl Hopkins ◽  
Saúl Maté-Cid ◽  
Robert Fulford ◽  
Gary Seiffert ◽  
Jane Ginsborg

This study investigated the perception and learning of relative pitch using vibrotactile stimuli by musicians with and without a hearing impairment. Notes from C3 to B4 were presented to the fingertip and forefoot. Pre- and post-training tests in which 420 pairs of notes were presented randomly were carried out without any feedback to participants. After the pre-training test, 16 short training sessions were carried out over six weeks, with 72 pairs of notes per session, and participants were told whether their answers were correct. For amateur and professional musicians with normal hearing and professional musicians with a severe or profound hearing loss, larger pitch intervals were easier to identify correctly than smaller intervals. Musicians with normal hearing had a high success rate for relative pitch discrimination, as shown by pre- and post-training tests, and when using the fingertips, there was no significant difference between amateur and professional musicians. After training, median scores on the tests in which stimuli were presented to the fingertip and forefoot were >70% for intervals of 3–12 semitones. Training sessions reduced the variability in the responses of amateur and professional musicians with normal hearing and improved their overall ability. As shown by the pre-training test, there was no significant difference in relative pitch discrimination ability, for intervals between 1 and 11 semitones, between professional musicians with and without a severe/profound hearing loss. These findings indicate that there is potential for vibration to be used to facilitate group musical performance and music education in schools for the deaf.
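The finding that larger intervals were easier to identify follows from equal-tempered tuning, in which an interval of n semitones corresponds to a frequency ratio of 2^(n/12). A minimal sketch (illustrative only; the A4 = 440 Hz reference is a standard assumption, not stated in the abstract):

```python
# Illustrative sketch: equal-tempered frequency ratios show why larger
# pitch intervals produce larger physical frequency differences.
def semitone_ratio(n: int) -> float:
    """Frequency ratio spanned by an interval of n semitones."""
    return 2 ** (n / 12)

# The study's note range, C3 to B4, relative to an A4 = 440 Hz reference:
C3 = 440.0 * semitone_ratio(-21)  # C3 is 21 semitones below A4
B4 = 440.0 * semitone_ratio(2)    # B4 is 2 semitones above A4

print(round(C3, 2), round(B4, 2))  # → 130.81 493.88
for n in (1, 3, 12):
    print(n, round(semitone_ratio(n), 4))
```

A 1-semitone pair differs by only about 6% in frequency, while a 12-semitone pair differs by a factor of two, which is consistent with the reported >70% accuracy for intervals of 3–12 semitones.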


2019 ◽  
Vol 23 ◽  
pp. 233121651988761 ◽  
Author(s):  
Gilles Courtois ◽  
Vincent Grimaldi ◽  
Hervé Lissek ◽  
Philippe Estoppey ◽  
Eleftheria Georganti

The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was however more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.


2021 ◽  
Author(s):  
Marlies Gillis ◽  
Lien Decruy ◽  
Jonas Vanthornhout ◽  
Tom Francart

We investigated the impact of hearing loss on the neural processing of speech. Using a forward modelling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level; for hearing-impaired listeners, this increase was not observed. Our results support the idea that neural response latency indicates the efficiency of neural speech processing. Hearing-impaired listeners process speech in silence less efficiently than normal-hearing listeners. Our results suggest that this reduction in neural speech processing efficiency is a gradual effect that occurs as hearing deteriorates. Moreover, the efficiency of neural speech processing in hearing-impaired listeners is already at its lowest level when listening to speech in quiet, whereas normal-hearing listeners show a further decrease in efficiency as the noise level increases. From our results, it is apparent that sound amplification alone does not solve hearing loss: even when intelligibility is apparently perfect, hearing-impaired listeners process speech less efficiently.
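A forward (encoding) model of the kind described here typically regresses the recorded neural signal onto time-lagged copies of a speech feature such as the envelope; "neural tracking" is then the correlation between predicted and measured responses, and the lag of the largest regression weight gives a response latency. The sketch below uses synthetic stand-in signals and an assumed sampling rate, not the study's data or pipeline:

```python
# Hedged sketch of a forward ("encoding") model: predict the neural
# signal from the speech envelope with lagged ridge regression, then
# score neural tracking as the prediction-response correlation.
# All signals are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
fs = 64                                  # Hz, assumed sampling rate
envelope = rng.standard_normal(fs * 60)  # 60 s of a fake speech envelope

# Design matrix of time-lagged copies of the envelope (0-250 ms).
lags = np.arange(0, int(0.25 * fs))
X = np.stack([np.roll(envelope, lag) for lag in lags], axis=1)

# Fake neural response: a delayed, noisy copy of the envelope (~125 ms).
true_lag = int(0.125 * fs)
eeg = np.roll(envelope, true_lag) + 0.5 * rng.standard_normal(envelope.size)

# Ridge solution: w = (X'X + lam*I)^(-1) X'y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)

prediction = X @ w
tracking = np.corrcoef(prediction, eeg)[0, 1]      # "neural tracking"
peak_latency_ms = 1000 * lags[np.argmax(w)] / fs   # response latency
print(round(tracking, 2), peak_latency_ms)
```

In this toy setup the recovered latency matches the 125 ms delay built into the fake response, mirroring how the study reads latency shifts out of the fitted model.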


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Torsten Rahne ◽  
Lars Böhme ◽  
Gerrit Götze

The identification and discrimination of timbre are essential features of music perception. One dominant parameter within the multidimensional timbre space is the spectral shape of complex sounds. Because hearing loss interferes with the perception and enjoyment of music, we assessed individual timbre discrimination skills in individuals with severe-to-profound hearing loss using a cochlear implant (CI) and in normal-hearing individuals using a bone-anchored hearing aid (Baha). With a recently developed behavioral test relying on synthetic sounds forming a spectral continuum, the timbre difference was changed adaptively to measure the individual just noticeable difference (JND) in a forced-choice paradigm. To explore differences in timbre perception ability caused by the hearing mode, the sound stimuli were varied in their fundamental frequency, thus generating different spectra that are not completely covered by a CI or Baha system. The resulting JNDs demonstrate differences in timbre perception between normal-hearing individuals, Baha users, and CI users. Besides physiological reasons, technical limitations also appear to be among the main contributing factors.
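An adaptive forced-choice JND measurement of this kind is commonly implemented as a staircase that shrinks the stimulus difference after correct responses and grows it after errors. The sketch below is an assumption-laden illustration (a generic 2-down/1-up rule and a toy simulated listener), not the study's actual procedure or parameters:

```python
# Illustrative sketch: a 2-down/1-up adaptive staircase converging on
# roughly the 70.7%-correct point of a simulated listener, as one common
# way to estimate a just noticeable difference (JND).
import random

random.seed(1)

def listener_correct(delta: float, jnd: float = 5.0) -> bool:
    """Toy psychometric listener: more likely correct for larger differences."""
    p = 0.5 + 0.5 * min(delta / (2 * jnd), 1.0)  # chance level at delta = 0
    return random.random() < p

def staircase(start: float = 20.0, step: float = 1.0,
              reversals_needed: int = 8) -> float:
    delta, correct_in_row, direction, reversals = start, 0, -1, []
    while len(reversals) < reversals_needed:
        if listener_correct(delta):
            correct_in_row += 1
            if correct_in_row == 2:       # two correct in a row -> harder
                correct_in_row = 0
                if direction == +1:       # descending turn = a reversal
                    reversals.append(delta)
                direction = -1
                delta = max(delta - step, 0.0)
        else:                             # one error -> easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)  # JND estimate

print(round(staircase(), 2))
```

Averaging the stimulus values at the reversal points yields the JND estimate; the same logic applies whether the adapted quantity is a spectral-shape step, a level, or a duration.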


2010 ◽  
Vol 21 (08) ◽  
pp. 493-511
Author(s):  
Amanda J. Ortmann ◽  
Catherine V. Palmer ◽  
Sheila R. Pratt

Background: A possible voicing cue used to differentiate voiced and voiceless cognate pairs is envelope onset asynchrony (EOA). EOA is the time between the onsets of two frequency bands of energy (in this study one band was high-pass filtered at 3000 Hz, the other low-pass filtered at 350 Hz). This study assessed the perceptual impact of manipulating EOA on voicing perception of initial stop consonants, and whether normal-hearing and hearing-impaired listeners were sensitive to changes in EOA as a cue for voicing. Purpose: The purpose of this study was to examine the effect of spectrally asynchronous auditory delay on the perception of voicing associated with initial stop consonants by normal-hearing and hearing-impaired listeners. Research Design: Prospective experimental study comparing the perceptual differences of manipulating the EOA cues for two groups of listeners. Study Sample: Thirty adults between the ages of 21 and 60 yr completed the study: 17 listeners with normal hearing and 13 listeners with mild-moderate sensorineural hearing loss. Data Collection and Analysis: The participants listened to voiced and voiceless stop consonants within a consonant-vowel syllable structure. The EOA of each syllable was varied along a continuum, and identification and discrimination tasks were used to determine if the EOA manipulation resulted in categorical shifts in voicing perception. In the identification task the participants identified the consonants as belonging to one of two categories (voiced or voiceless cognate). They also completed a same-different discrimination task with the same set of stimuli. Categorical perception was confirmed with a d-prime sensitivity measure by examining how accurately the results from the identification task predicted the discrimination results. The influence of EOA manipulations on the perception of voicing was determined from shifts in the identification functions and discrimination peaks along the EOA continuum. 
The two participant groups were compared in order to determine the impact of EOA on voicing perception as a function of syllable and hearing status. Results: Both groups of listeners demonstrated a categorical shift in voicing perception with manipulation of EOA for some of the syllables used in this study. That is, as the temporal onset asynchrony between low- and high-frequency bands of speech was manipulated, the listeners' perception of consonant voicing changed between voiced and voiceless categories. No significant differences were found between listeners with normal hearing and listeners with hearing loss as a result of the EOA manipulation. Conclusions: The results of this study suggested that both normal-hearing and hearing-impaired listeners likely use spectrally asynchronous delays found in natural speech as a cue for voicing distinctions. While delays in modern hearing aids are less than those used in this study, possible implications are that additional asynchronous delays from digital signal processing or open-fitting amplification schemes might cause listeners with hearing loss to misperceive voicing cues.
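The d-prime check described above converts hit and false-alarm rates from the same-different task into a bias-free sensitivity index, d' = z(H) − z(FA). A minimal sketch, with invented rates and an assumed 1/(2n) correction for perfect scores (the study's exact correction rule is not given in the abstract):

```python
# Hedged illustration of the d-prime sensitivity measure: convert hit
# and false-alarm rates into d' = z(H) - z(FA). The rates and the
# 1/(2n) edge correction here are invented for the example.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float, n: int = 50) -> float:
    """d' with rates clamped away from 0 and 1 by 1/(2n)."""
    def clamp(p: float) -> float:
        return min(max(p, 1 / (2 * n)), 1 - 1 / (2 * n))
    z = NormalDist().inv_cdf
    return z(clamp(hit_rate)) - z(clamp(fa_rate))

# A discrimination peak at a category boundary shows up as a larger d':
print(round(d_prime(0.90, 0.20), 2))  # across-boundary pair → 2.12
print(round(d_prime(0.60, 0.40), 2))  # within-category pair → 0.51
```

Categorical perception is supported when identification-based predictions line up with elevated d' for stimulus pairs straddling the voiced/voiceless boundary.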


2015 ◽  
Vol 58 (3) ◽  
pp. 1017-1032 ◽  
Author(s):  
Julia Z. Sarant ◽  
David C. Harris ◽  
Lisa A. Bennet

Purpose This study sought to (a) determine whether academic outcomes for children who received early cochlear implants (CIs) are age appropriate, (b) determine whether bilateral CI use significantly improves academic outcomes, and (c) identify other factors that are predictive of these outcomes. Method Forty-four 8-year-old children with severe–profound hearing loss participated in this study. Their academic development in mathematics, oral language, reading, and written language was assessed using a standardized test of academic achievement. Results (a) Across all academic areas, the proportion of children in the average or above-average ranges was lower than expected for children with normal hearing. The strongest area of performance was written language, and the weakest was mathematics. (b) Children using bilateral CIs achieved significantly higher scores for oral language, math, and written language, after controlling for predictive factors, than did children using unilateral CIs. Younger ages at second CI predicted the largest improvements. (c) High levels of parental involvement and greater time spent by children reading significantly predicted academic success, although other factors were identified. Conclusions Average academic outcomes for these children were below those of children with normal hearing. Having bilateral CIs at younger ages predicted the best outcomes. Family environment was also important to children's academic performance.


1996 ◽  
Vol 115 (1) ◽  
pp. 70-77 ◽  
Author(s):  
Peter A. Selz ◽  
Marian Girardi ◽  
Horst R. Konrad ◽  
Larry F. Hughes

Considerable knowledge has been accumulated regarding acquired and congenital deafness in children. However, despite the intimate relationship between the auditory and vestibular systems, data are limited regarding the status of the balance system in these children. Using a test population of 15 children, aged 8 to 17 years, we performed electronystagmography testing. The test battery consisted of the eye-tracking (gaze nystagmus, spontaneous nystagmus, saccade, horizontal pursuit and optokinetic) tests, positional/positioning (Dix-Hallpike and supine) tests, and rotational chair tests. With age-matched controls, five children were tested in each of the following three categories: normal hearing, hereditary deafness, and acquired deafness. The children in the hereditary deafness category were congenitally deaf and had a family history of deafness. Those subjects in the acquired deafness category had hearing loss before the age of 2 years, after meningitis. Analysis of variance demonstrated significant differences between the two deaf groups and the control subjects in the gaze nystagmus test, saccade latencies, horizontal pursuit phase, and Dix-Hallpike and supine positionally provoked nystagmus. Also, significant differences were found in rotational chair gain and phase between the deaf and normal-hearing children. The children with acquired deafness exhibited the most pronounced abnormalities. In addition, there were significant differences in rotational chair gain between the acquired and congenitally deaf children. No differences were noted in horizontal pursuit gains, saccade accuracies, or saccade asymmetries. These preliminary data demonstrate that the etiologic factors responsible for congenital and acquired deafness in children may indeed affect the balance system as well.
These findings of possible balance disorders in conjunction with the profound hearing loss in this patient population will have prognostic implications in the future evaluation, treatment, and rehabilitation of these patients.


Author(s):  
Ying Yang ◽  
Yanan Xiao ◽  
Yulu Liu ◽  
Qiong Li ◽  
Changshuo Shan ◽  
...  

Background: This study compares the mental health and psychological responses of students with and without hearing loss during the recurrence of the COVID-19 pandemic in Beijing, the capital of China. It explores the factors affecting mental health and provides evidence-driven strategies to reduce adverse psychological impacts during the COVID-19 pandemic. Methods: We used the Chinese version of the Depression, Anxiety, and Stress Scale-21 (DASS-21) to assess mental health and the Impact of Event Scale-Revised (IES-R) to assess the psychological impact of COVID-19. Results: The students with hearing loss were frustrated by their disability and particularly vulnerable to stress symptoms, but they showed high endurance in mitigating this negative impact on their well-being and responsibilities. They were also more resilient psychologically, but less resistant mentally, to the pandemic's impacts than the students with normal hearing, and their mental and psychological responses to the pandemic were associated with more factors and variables. Conclusions: To safeguard the welfare of society, timely information on the pandemic, essential services for communication disorders, and additional assistance and support in mental counseling should be provided to vulnerable persons with hearing loss, who are more susceptible to a public health emergency.


2021 ◽  
Vol 12 ◽  
Author(s):  
Emiro J. Ibarra ◽  
Jesús A. Parra ◽  
Gabriel A. Alzamendi ◽  
Juan P. Cortés ◽  
Víctor M. Espinoza ◽  
...  

The ambulatory assessment of vocal function can be significantly enhanced by having access to physiologically based features that describe underlying pathophysiological mechanisms in individuals with voice disorders. This type of enhancement can improve methods for the prevention, diagnosis, and treatment of behaviorally based voice disorders. Unfortunately, the direct measurement of important vocal features such as subglottal pressure, vocal fold collision pressure, and laryngeal muscle activation is impractical in laboratory and ambulatory settings. In this study, we introduce a method to estimate these features during phonation from a neck-surface vibration signal through a framework that integrates a physiologically relevant model of voice production and machine learning tools. The signal from a neck-surface accelerometer is first processed using subglottal impedance-based inverse filtering to yield an estimate of the unsteady glottal airflow. Seven aerodynamic and acoustic features are extracted from the neck surface accelerometer and an optional microphone signal. A neural network architecture is selected to provide a mapping between the seven input features and subglottal pressure, vocal fold collision pressure, and cricothyroid and thyroarytenoid muscle activation. This non-linear mapping is trained solely with 13,000 Monte Carlo simulations of a voice production model that utilizes a symmetric triangular body-cover model of the vocal folds. The performance of the method was compared against laboratory data from synchronous recordings of oral airflow, intraoral pressure, microphone, and neck-surface vibration in 79 vocally healthy female participants uttering consecutive /pæ/ syllable strings at comfortable, loud, and soft levels. 
The mean absolute error and root-mean-square error for estimating the mean subglottal pressure were 191 Pa (1.95 cm H2O) and 243 Pa (2.48 cm H2O), respectively, which are comparable with previous studies but with the key advantage of not requiring subject-specific training and yielding more output measures. The validation of vocal fold collision pressure and laryngeal muscle activation was performed with synthetic values as reference. These initial results provide valuable insight for further vocal fold model refinement and constitute a proof of concept that the proposed machine learning method is a feasible option for providing physiologically relevant measures for laboratory and ambulatory assessment of vocal function.
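The paired Pa and cm H2O figures above are straightforward unit conversions (1 cm H2O ≈ 98.0665 Pa), which can be checked directly:

```python
# Quick arithmetic check of the reported errors: 1 cm H2O ≈ 98.0665 Pa,
# so the MAE and RMSE above convert between units as follows.
PA_PER_CM_H2O = 98.0665

def pa_to_cm_h2o(pa: float) -> float:
    return pa / PA_PER_CM_H2O

print(round(pa_to_cm_h2o(191), 2))  # → 1.95 (MAE)
print(round(pa_to_cm_h2o(243), 2))  # → 2.48 (RMSE)
```

Both reported pairs are consistent under this conversion.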

