The Effects of Preoperative Audiovisual Speech Perception on the Audiologic Outcomes of Cochlear Implantation in Patients with Postlingual Deafness

2020 ◽  
pp. 1-8
Author(s):  
Hyun Jin Lee ◽  
Jeon Mi Lee ◽  
Jae Young Choi ◽  
Jinsei Jung

Introduction: Patients with postlingual deafness usually depend on visual information for communication, and their lipreading ability could influence cochlear implantation (CI) outcomes. However, it is unclear whether preoperative visual dependency in postlingual deafness positively or negatively affects auditory rehabilitation after CI. Herein, we investigated the influence of preoperative audiovisual perception on CI outcomes. Method: In this retrospective case-comparison study, 118 patients with postlingual deafness who underwent unilateral CI were enrolled. Speech perception was evaluated under both audiovisual (AV) and audio-only (AO) conditions before and after CI. Before CI, the speech perception test was performed under hearing aid (HA)-assisted conditions; after CI, it was performed under the CI-only condition. Only patients with a preoperative AO speech perception score of 10% or less were included. Results: Multivariable regression analysis showed that age, gender, residual hearing, operation side, education level, and HA usage were not correlated with either postoperative AV (pAV) or AO (pAO) speech perception. However, duration of deafness showed a significant negative correlation with both pAO (p = 0.003) and pAV (p = 0.015) speech perception. Notably, the preoperative AV speech perception score was not correlated with pAO speech perception (R² = 0.00134, p = 0.693) but was positively associated with pAV speech perception (R² = 0.0731, p = 0.003). Conclusion: Preoperative dependency on audiovisual information may positively influence pAV speech perception in patients with postlingual deafness.
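As a minimal sketch of the kind of regression reported above (relating the preoperative AV score to the postoperative AO and AV scores), the following Python snippet uses simulated placeholder scores rather than the study data; the variable names and values are assumptions for illustration only.

```python
# Hedged sketch: simple regressions of postoperative scores on the preoperative AV score.
# All arrays are simulated placeholders, not the 118 patients' actual scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_av = rng.uniform(0, 100, size=118)                                    # preoperative AV scores (%)
post_ao = rng.uniform(0, 100, size=118)                                   # postoperative AO scores (%)
post_av = np.clip(0.3 * pre_av + rng.normal(40, 15, size=118), 0, 100)    # postoperative AV scores (%)

for name, outcome in [("pAO", post_ao), ("pAV", post_av)]:
    res = stats.linregress(pre_av, outcome)
    print(f"preop AV -> {name}: R^2 = {res.rvalue**2:.4f}, p = {res.pvalue:.3f}")
```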

2017 ◽  
Vol 28 (10) ◽  
pp. 913-919 ◽  
Author(s):  
Margaret A. Meredith ◽  
Jay T. Rubinstein ◽  
Kathleen C. Y. Sie ◽  
Susan J. Norton

Background: Children with steeply sloping sensorineural hearing loss (SNHL) lack access to critical high-frequency cues despite the use of advanced hearing aid technology. In addition, their auditory-only aided speech perception abilities often meet Food and Drug Administration criteria for cochlear implantation. Purpose: The objective of this study was to describe hearing preservation and speech perception outcomes in a group of young children with steeply sloping SNHL who received a cochlear implant (CI). Research Design: Retrospective case series. Study Sample: Eight children with steeply sloping postlingual progressive SNHL who received a unilateral traditional CI at Seattle Children’s Hospital between 2009 and 2013 and had follow-up data available up to 24 mo postimplant were included. Data Collection and Analysis: A retrospective chart review was completed. Medical records were reviewed for demographic information, preoperative and postoperative behavioral hearing thresholds, and speech perception scores. Paired t tests were used to analyze the speech perception data, and hearing preservation results are reported. Results: Rapid improvement of speech perception scores was observed within the first month postimplant for all participants. Mean monosyllabic word scores were 76% and mean phoneme scores were 86.7% at 1 mo postactivation, compared with mean preimplant scores of 19.5% and 31.0%, respectively. Hearing preservation was observed in five participants out to 24 mo postactivation. Two participants lost hearing in both the implanted and unimplanted ears and, after the hearing loss progressed, received a sequential CI in the other ear. One participant had a total loss of hearing in the implanted ear only. Results reported in this article are from the first-implanted ear; bilateral outcomes are not reported. Conclusions: CIs provided benefit for children with steeply sloping bilateral hearing loss for whom hearing aids did not provide adequate auditory access. In our cohort, significant improvements in speech understanding occurred rapidly postactivation. Preservation of residual hearing with a traditional CI electrode is possible in children.
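The paired t tests mentioned above can be illustrated with a short Python sketch; the preimplant and 1-month postactivation word scores below are hypothetical stand-ins for the eight children, not the reported data.

```python
# Hedged sketch: paired t test of preimplant vs. 1-month postactivation word scores.
# The eight score pairs are illustrative placeholders only.
import numpy as np
from scipy import stats

pre_word = np.array([10, 15, 20, 25, 18, 22, 30, 16], dtype=float)   # % correct, preimplant
post_word = np.array([70, 80, 75, 85, 72, 78, 82, 66], dtype=float)  # % correct, 1 mo postactivation

t_stat, p_value = stats.ttest_rel(post_word, pre_word)
print(f"mean pre = {pre_word.mean():.1f}%, mean post = {post_word.mean():.1f}%")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```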


2000 ◽  
Vol 23 (3) ◽  
pp. 327-328 ◽  
Author(s):  
Lawrence Brancazio ◽  
Carol A. Fowler

The present description of the Merge model addresses only auditory, not audiovisual, speech perception. However, recent findings in the audiovisual domain are relevant to the model. We outline a test, currently under way, of the adequacy of Merge when it is modified to accept visual information about articulation.


2005 ◽  
Vol 119 (9) ◽  
pp. 719-723 ◽  
Author(s):  
A Daneshi ◽  
S Hassanzadeh ◽  
M Farhadi

Waardenburg syndrome is an autosomal-dominant trait resulting from mutations in different genes. It is often characterized by varying degrees of congenital hearing loss; dystopia canthorum; synophrys; broad nasal root; depigmentation of the hair (white forelock), skin, or both; and heterochromic or hypochromic irides. A retrospective case study was done to assess speech perception, speech production, general intelligence, and educational setting in six profoundly hearing-impaired children with Waardenburg syndrome (four with type I, one with type II, and one with type III) ranging in age from two years to 14 years, seven months (mean = six years, six months). None of the patients had a cochlear malformation, and all were implanted with Nucleus 22/24 and Med-El Combi 40+ devices. Five of the six cases were of average intelligence and one had a borderline intelligence quotient. The follow-up period ranged from one year, 10 months to six years, six months (mean = three years, six months) after implantation. Auditory perception was evaluated using the Persian Auditory Perception Test for the Hearing-Impaired, a Persian spondee words test, and the Categories of Auditory Performance Index. The Speech Intelligibility Rating test was used to evaluate speech production ability. All patients’ speech perception and speech intelligibility improved considerably after receiving the implants, and they were able to be placed in regular educational settings. Patients used their cochlear-implant devices whenever awake, implying that they benefited from the devices. We suggest that any further expansion of cochlear-implantation criteria in children include those with Waardenburg syndrome.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 347-347
Author(s):  
M Sams

Persons with hearing loss use visual information from articulation to improve their speech perception. Even persons with normal hearing utilise visual information, especially when the signal-to-noise ratio is poor. A dramatic demonstration of the role of vision in speech perception is the audiovisual fusion called the ‘McGurk effect’: when the auditory syllable /pa/ is presented in synchrony with a face articulating the syllable /ka/, the subject usually perceives /ta/ or /ka/. The illusory perception is clearly auditory in nature. We recently studied this audiovisual fusion (acoustic /p/, visual /k/) for Finnish (1) syllables and (2) words. Only 3% of the subjects perceived the syllables according to the acoustic input, i.e. in 97% of the subjects perception was influenced by the visual information. For words, the percentage of acoustic identifications was 10%. The results demonstrate a very strong influence of visual articulatory information in face-to-face speech perception. Word meaning and sentence context have a negligible influence on the fusion. We have also recorded neuromagnetic responses of the human cortex while subjects both heard and saw speech. Some subjects showed a distinct response to a ‘McGurk’ stimulus. The response was rather late, emerging about 200 ms after the onset of the auditory stimulus. We suggest that the perisylvian cortex, close to the source area of the auditory 100 ms response (M100), may be activated by the discordant stimuli. The behavioural and neuromagnetic results suggest a precognitive audiovisual speech integration occurring at a relatively early processing level.


2020 ◽  
Vol 10 (6) ◽  
pp. 328
Author(s):  
Melissa Randazzo ◽  
Ryan Priefer ◽  
Paul J. Smith ◽  
Amanda Nagler ◽  
Trey Avery ◽  
...  

The McGurk effect, an incongruent pairing of visual /ga/ with acoustic /ba/ that creates the fusion illusion /da/, is the cornerstone of research in audiovisual speech perception. Combination illusions occur when the input modalities are reversed: auditory /ga/ paired with visual /ba/ yields the percept /bga/. A robust literature shows that fusion illusions in an oddball paradigm evoke a mismatch negativity (MMN) in the auditory cortex in the absence of changes to the acoustic stimulus. We compared fusion and combination illusions in a passive oddball paradigm to further examine the influence of the visual and auditory aspects of incongruent speech stimuli on the audiovisual MMN. Participants viewed videos under two audiovisual illusion conditions (fusion, with the visual aspect of the stimulus changing, and combination, with the auditory aspect of the stimulus changing) as well as two unimodal auditory-only and visual-only conditions. Fusion and combination deviants exerted similar influence in generating congruency predictions, with significant differences between standards and deviants in the N100 time window. The presence of the MMN in early and late time windows differentiated fusion from combination deviants. When the visual signal changes, a new percept is created; when the visual signal is held constant and the auditory signal changes, the response is suppressed, evoking a later MMN. In alignment with models of predictive processing in audiovisual speech perception, we interpreted our results to indicate that visual information can both predict and suppress auditory speech perception.
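For readers unfamiliar with the MMN measure, a minimal sketch of the standard difference-wave computation in an oddball paradigm is given below; the simulated epochs, sampling rate, and analysis window are assumptions, not the authors' pipeline.

```python
# Hedged sketch: MMN as the deviant-minus-standard ERP difference wave.
# Epoch arrays are simulated noise standing in for baseline-corrected EEG trials.
import numpy as np

fs = 500                                   # sampling rate (Hz), assumed
n_samples = int(0.6 * fs)                  # 600-ms epochs
rng = np.random.default_rng(1)
standard_epochs = rng.normal(0, 1, (200, n_samples))   # trials x samples
deviant_epochs = rng.normal(0, 1, (40, n_samples))

erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)
mmn_wave = erp_deviant - erp_standard      # difference wave; the MMN is its negativity

# Mean amplitude in an assumed early window (150-250 ms post-stimulus)
window = slice(int(0.150 * fs), int(0.250 * fs))
print(f"mean difference-wave amplitude, early window: {mmn_wave[window].mean():.3f} (a.u.)")
```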


2019 ◽  
Author(s):  
Patrick J. Karas ◽  
John F. Magnotti ◽  
Brian A. Metzger ◽  
Lin L. Zhu ◽  
Kristen B. Smith ◽  
...  

Vision provides a perceptual head start for speech perception because most speech is “mouth-leading”: visual information from the talker’s mouth is available before auditory information from the voice. However, some speech is “voice-leading” (auditory before visual). Consistent with a model in which vision modulates subsequent auditory processing, there was a larger perceptual benefit of visual speech for mouth-leading vs. voice-leading words (28% vs. 4%). The neural substrates of this difference were examined by recording broadband high-frequency activity from electrodes implanted over auditory association cortex in the posterior superior temporal gyrus (pSTG) of epileptic patients. Responses were smaller for audiovisual vs. auditory-only mouth-leading words (34% difference), while there was little difference (5%) for voice-leading words. Evidence for cross-modal suppression of auditory cortex complements our previous work showing enhancement of visual cortex (Ozker et al., 2018b) and confirms that multisensory interactions are a powerful modulator of activity throughout the speech perception network. Impact Statement: Human perception and brain responses differ between words in which mouth movements are visible before the voice is heard and words for which the reverse is true.
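A rough sketch of the percent-difference comparison described above (audiovisual vs. auditory-only responses for mouth-leading and voice-leading words) follows; the response values are simulated placeholders, not the recorded pSTG data.

```python
# Hedged sketch: percent reduction of the AV response relative to the A-only response,
# computed separately for mouth-leading and voice-leading words. Values are simulated.
import numpy as np

def percent_reduction(auditory_only: np.ndarray, audiovisual: np.ndarray) -> float:
    """Mean percent reduction of the AV response relative to the A-only response."""
    return 100.0 * (auditory_only.mean() - audiovisual.mean()) / auditory_only.mean()

rng = np.random.default_rng(3)
mouth_A, mouth_AV = rng.normal(1.0, 0.1, 50), rng.normal(0.66, 0.1, 50)   # mouth-leading trials
voice_A, voice_AV = rng.normal(1.0, 0.1, 50), rng.normal(0.95, 0.1, 50)   # voice-leading trials

print(f"mouth-leading: {percent_reduction(mouth_A, mouth_AV):.0f}% smaller for AV")
print(f"voice-leading: {percent_reduction(voice_A, voice_AV):.0f}% smaller for AV")
```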


2019 ◽  
Author(s):  
Kristin J. Van Engen ◽  
Avanti Dey ◽  
Mitchell Sommers ◽  
Jonathan E. Peelle

Although listeners use both auditory and visual cues during speech perception, the cognitive and neural bases for their integration remain a matter of debate. One common approach to measuring multisensory integration is to use McGurk tasks, in which discrepant auditory and visual cues produce auditory percepts that differ from those based solely on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences in susceptibility are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we argue that McGurk tasks are ill-suited for studying the kind of multisensory speech perception that occurs in real life: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility on McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication.
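The correlation analysis alluded to above (McGurk susceptibility vs. natural audiovisual speech performance) could be computed as in the hedged sketch below; the per-listener scores are simulated, not data from the cited studies.

```python
# Hedged sketch: Pearson correlation between McGurk susceptibility and accuracy on
# natural audiovisual speech. Both arrays are simulated per-listener scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mcgurk_susceptibility = rng.uniform(0, 1, size=40)     # proportion of fusion responses
natural_av_accuracy = rng.uniform(0.5, 1.0, size=40)   # proportion correct, e.g. sentences in noise

r, p = stats.pearsonr(mcgurk_susceptibility, natural_av_accuracy)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # the cited work reports no reliable correlation
```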

