The Effects of Hearing Impairment, Age, and Hearing Aids on the Use of Self-Motion for Determining Front/Back Location

2016 ◽  
Vol 27 (07) ◽  
pp. 588-600 ◽  
Author(s):  
W. Owen Brimijoin ◽  
Michael A. Akeroyd

Background: There are two cues that listeners use to disambiguate the front/back location of a sound source: high-frequency spectral cues associated with the head and pinnae, and self-motion–related binaural cues. The use of these cues can be compromised in listeners with hearing impairment and users of hearing aids. Purpose: To determine how age, hearing impairment, and the use of hearing aids affect a listener’s ability to determine front from back based on both self-motion and spectral cues. Research Design: We used a previously published front/back illusion: signals whose physical source location is rotated around the head at twice the angular rate of the listener’s head movements are perceptually located in the opposite hemifield from where they physically are. In normal-hearing listeners, the strength of this illusion decreases as a function of low-pass filter cutoff frequency; this is the result of a conflict between spectral cues and dynamic binaural cues for sound source location. The illusion was used as an assay of self-motion processing in listeners with hearing impairment and users of hearing aids. Study Sample: We recruited 40 hearing-impaired participants, with an average age of 62 yr. The data for three listeners were discarded because they did not move their heads enough during the experiment. Data Collection and Analysis: Listeners sat at the center of a ring of 24 loudspeakers, turned their heads back and forth, and used a wireless keypad to report the front/back location of statically presented signals and of dynamically moving signals with illusory locations. Front/back accuracy for static signals, the strength of front/back illusions, and minimum audible movement angle were measured for each listener in each condition. All measurements were made in each listener both aided and unaided. Results: Hearing-impaired listeners were less accurate at front/back discrimination for both static and illusory conditions. 
Neither static nor illusory conditions were affected by high-frequency content. Hearing aids had heterogeneous effects from listener to listener, but on average, and independent of other factors, listeners wearing aids exhibited a spectrally dependent increase in “front” responses: the more high-frequency energy in the signal, the more likely they were to report it as coming from the front. Conclusions: Hearing impairment was associated with a decrease in the accuracy of self-motion processing for both static and moving signals. Hearing aids may not always reproduce dynamic self-motion–related cues with sufficient fidelity to allow reliable front/back discrimination.

2014 ◽  
Vol 25 (09) ◽  
pp. 791-803 ◽  
Author(s):  
Evelyne Carette ◽  
Tim Van den Bogaert ◽  
Mark Laureyns ◽  
Jan Wouters

Background: Several studies have demonstrated negative effects of directional microphone configurations on left-right and front-back (FB) sound localization. New processing schemes in recent commercial hearing aids, such as frequency-dependent directionality and front focus with wireless ear-to-ear communication, may preserve the binaural cues necessary for left-right localization and may introduce useful spectral cues necessary for FB disambiguation. Purpose: In this study, two hearing aids with different processing schemes, both designed to preserve the ability to localize sounds in the horizontal plane (left-right and FB), were compared. Research Design: We compared horizontal (left-right and FB) sound localization performance of hearing aid users fitted with two types of behind-the-ear (BTE) devices. The first type of BTE device had four different programs that provided (1) no directionality, (2–3) symmetric frequency-dependent directionality, and (4) an asymmetric configuration. The second pair of BTE devices was evaluated in its omnidirectional setting. This setting automatically activates a soft forward-oriented directional scheme that mimics the pinna effect; wireless communication between the hearing aids was also present in this configuration (5). A broadband stimulus was used as a target signal. The directional hearing abilities of the listeners were also evaluated without hearing aids as a reference. Study Sample: A total of 12 listeners with moderate to severe hearing loss participated in this study. All were experienced hearing-aid users. As a reference, 11 listeners with normal hearing participated. Data Collection and Analysis: The participants were positioned in a 13-speaker array (left-right, –90°/+90°) or a 7-speaker array (FB, 0–180°) and were asked to report the number of the loudspeaker closest to where the sound was perceived. 
The root mean square error was calculated for the left-right experiment, and the percentage of FB errors was used as an FB performance measure. Results were analyzed with repeated-measures analysis of variance. Results: For the left-right localization task, no significant differences could be demonstrated between the unaided condition and either the partial directional schemes or the omnidirectional scheme. The soft forward-oriented system and the asymmetric system did show a detrimental effect compared with the unaided condition. On average, localization was worst in the asymmetric condition. Analysis of the results of the FB experiment showed good performance, similar to unaided, with both the partial directional systems and the asymmetric configuration. Significantly worse performance was found with the omnidirectional and the omnidirectional soft forward-oriented BTE systems compared with the other hearing-aid systems. Conclusions: Bilaterally fitted partial directional systems preserve (part of) the binaural cues necessary for left-right localization and introduce, preserve, or enhance useful spectral cues that allow FB disambiguation. Omnidirectional systems, although good for left-right localization, do not provide the user with enough spectral information for optimal FB localization performance.
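The two outcome measures used in this study can be sketched in Python. This is a hypothetical illustration of the metrics, not the study's analysis code; the trial angles below are invented, with angles in degrees on speaker arcs like those described in the abstract:

```python
import math

def rms_error(responses, targets):
    """Root mean square localization error (degrees) for a left-right task."""
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(responses, targets))
                     / len(responses))

def fb_error_rate(responses, targets):
    """Fraction of trials in which a source in the front hemifield (< 90 deg on
    a 0-180 deg arc) was reported in the rear hemifield, or vice versa."""
    def hemifield(angle):  # 0 = front, 1 = back
        return 0 if angle < 90 else 1
    errors = sum(1 for r, t in zip(responses, targets)
                 if hemifield(r) != hemifield(t))
    return errors / len(responses)

# Hypothetical trials: target vs. reported angle (0 = front, 180 = back).
targets = [0, 30, 150, 180]
responses = [10, 20, 30, 170]   # the 150-deg source heard at 30 deg: one FB error
print(rms_error(responses, targets))     # -> 35*sqrt(3), about 60.62
print(fb_error_rate(responses, targets)) # -> 0.25
```

A single front/back confusion barely moves the percent-correct score but inflates the RMS error enormously, which is why the two tasks are scored separately.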


2017 ◽  
Vol 36 (3) ◽  
pp. 910-916
Author(s):  
DBN Nnadi ◽  
NC Onu ◽  
SE Oti ◽  
CU Ogbuefi

This research paper presents a microcontroller-based binaural digital hearing aid for hearing-impaired people, built around an ATmega328 microcontroller and supporting circuitry. The system processes the input audio signal by raising or lowering its gain, filtering background noise, and applying frequency compression, while saving battery power and minimizing the circuit by using the microcontroller’s internal ADC and two of its PWM pins as a DAC. Hearing impairment among young people and adults is on the increase, partly due to improper use of phones: at every minute of the day, someone’s earphones are playing one kind of music or another. This research work was conceived to address that problem. The different stages of the digital hearing aid were designed and first simulated in Proteus software, then implemented on a PCB. The main components of the system were the audio input unit, consisting of the microphone and its preamplifier; the microcontroller (ATmega328), which provides the ADC, the DAC, and the audio signal processing; the filter stage and control codes (frequency compression, power saving, acoustic feedback control, signal level control, adaptive adjustment, etc.); the power amplifier and volume control unit; and the earphones (output). The control codes were written in C, and the Arduino Uno toolchain was used to compile them and program the ATmega328. The prototype has an overall system gain of 27 dB and a power output of 32.5 mW. It was tested with a patient with a hearing impairment, who was satisfied with the device. http://dx.doi.org/10.4314/njt.v36i3.34
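The per-sample signal path the abstract describes (internal ADC in, gain applied in software, PWM output used as a DAC) can be sketched as follows. This is a minimal illustration only, not the paper's firmware: the bit widths (10-bit ADC, 8-bit PWM, which are the ATmega328 defaults), the gain value, and the function name are all assumptions.

```python
def process_sample(adc_value, gain=2.0, adc_bits=10, dac_bits=8):
    """One pass of a simplified hearing-aid chain: take an unsigned ADC code,
    amplify the signal around its midpoint (the AC component), rescale to the
    PWM/DAC range, and clip to avoid wrap-around distortion."""
    mid = (1 << adc_bits) // 2
    amplified = (adc_value - mid) * gain        # gain applied to the AC part only
    out_mid = (1 << dac_bits) // 2
    scale = (1 << dac_bits) / (1 << adc_bits)   # map 10-bit input onto 8-bit output
    out = int(out_mid + amplified * scale)
    return max(0, min((1 << dac_bits) - 1, out))  # clip to valid DAC codes

# Silence (the ADC midpoint) maps to the DAC midpoint; loud input clips.
print(process_sample(512))    # -> 128
print(process_sample(1023))   # -> 255 (clipped)
```

On the real device this loop would run per ADC interrupt, with the output written to the PWM duty-cycle register rather than returned.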


1992 ◽  
Vol 91 (2) ◽  
pp. 1015-1027 ◽  
Author(s):  
Michael S. Brainard ◽  
Eric I. Knudsen ◽  
Steven D. Esterly

2000 ◽  
Vol 43 (3) ◽  
pp. 661-674 ◽  
Author(s):  
Pamela E. Souza

This study compared the ability of younger and older listeners to use temporal information in speech when that information was altered by compression amplification. Recognition of vowel-consonant-vowel syllables was measured for four groups of adult listeners (younger normal hearing, older normal hearing, younger hearing impaired, older hearing impaired). There were four conditions. Syllables were processed with wide-dynamic range compression (WDRC) amplification and with linear amplification. In each of those conditions, recognition was measured for syllables containing only temporal information and for syllables containing spectral and temporal information. Recognition of WDRC-amplified speech provided an estimate of the ability to use altered amplitude envelope cues. Syllables were presented with a high-frequency masker to minimize confounding differences in high-frequency sensitivity between the younger and older groups. Scores were lower for WDRC-amplified speech than for linearly amplified speech, and older listeners performed more poorly than younger listeners. When spectral information was unrestricted, the age-related decrement was similar for both amplification types. When spectral information was restricted for listeners with normal hearing, the age-related decrement was greater for WDRC-amplified speech than for linearly amplified speech. When spectral information was restricted for listeners with hearing loss, the age-related decrement was similar for both amplification types. Clinically, these results imply that when spectral cues are available (i.e., when the listener has adequate spectral resolution) older listeners can use WDRC hearing aids to the same extent as younger listeners. For older listeners without hearing loss, poorer scores for compression-amplified speech suggest an age-related deficit in temporal resolution.
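The contrast between WDRC and linear amplification can be sketched as a simple input/output rule. The kneepoint, gain, and ratio values below are illustrative assumptions, not the study's fitting parameters:

```python
def wdrc_output_level(input_db, gain_db=20.0, kneepoint_db=45.0, ratio=3.0):
    """Output level (dB) under wide-dynamic-range compression: linear gain
    below the kneepoint, compressed growth above it. With a 3:1 ratio, each
    3 dB of input above the knee adds only 1 dB of output."""
    if input_db <= kneepoint_db:
        return input_db + gain_db                        # linear region
    return kneepoint_db + gain_db + (input_db - kneepoint_db) / ratio

def linear_output_level(input_db, gain_db=20.0):
    """Linear amplification: constant gain at every input level."""
    return input_db + gain_db

# Soft input gets the same boost from both schemes...
print(wdrc_output_level(40), linear_output_level(40))   # -> 60.0 60.0
# ...but above the kneepoint WDRC flattens the amplitude envelope.
print(wdrc_output_level(75), linear_output_level(75))   # -> 75.0 95.0
```

Because the gain varies with moment-to-moment level, WDRC reduces the depth of the amplitude envelope, which is the altered temporal cue the study probes.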


2009 ◽  
Vol 2009 ◽  
pp. 1-5 ◽  
Author(s):  
Mary S. Shall

Children with hearing deficits frequently have delayed motor development. The purpose of this study was to evaluate saccular function in children with hearing impairments using the Vestibular Evoked Myogenic Potential (VEMP). The impact of saccular hypofunction on the timely maturation of normal balance strategies was examined using the Movement Assessment Battery for Children (Movement ABC). Thirty-three children with bilateral severe/profound hearing impairment between 4 and 7 years of age were recruited from a three-state area. Approximately half of the sample had unilateral or bilateral cochlear implants, one used bilateral hearing aids, and the rest used no amplification. Parents reported whether the hearing impairment was diagnosed within the first year or after 2 years of age. No VEMP was evoked in two thirds of the hearing impaired (HI) children in response to the bone-conducted stimulus. Children who were reportedly hearing impaired since birth had significantly poorer scores when tested with the Movement ABC.


2016 ◽  
Vol 21 (3) ◽  
pp. 127-131 ◽  
Author(s):  
Michael F. Dorman ◽  
Louise H. Loiselle ◽  
Sarah J. Cook ◽  
William A. Yost ◽  
René H. Gifford

Objective: Our primary aim was to determine whether listeners in the following patient groups achieve localization accuracy within the 95th percentile of accuracy shown by younger or older normal-hearing (NH) listeners: (1) hearing impaired with bilateral hearing aids, (2) bimodal cochlear implant (CI), (3) bilateral CI, (4) hearing preservation CI, (5) single-sided deaf CI and (6) combined bilateral CI and bilateral hearing preservation. Design: The listeners included 57 young NH listeners, 12 older NH listeners, 17 listeners fit with hearing aids, 8 bimodal CI listeners, 32 bilateral CI listeners, 8 hearing preservation CI listeners, 13 single-sided deaf CI listeners and 3 listeners with bilateral CIs and bilateral hearing preservation. Sound source localization was assessed in a sound-deadened room with 13 loudspeakers arrayed in a 180-degree arc. Results: The root mean square (rms) error for the NH listeners was 6 degrees. The 95th percentile was 11 degrees. Nine of 16 listeners with bilateral hearing aids achieved scores within the 95th percentile of normal. Only 1 of 64 CI patients achieved a score within that range. Bimodal CI listeners scored at a level near chance, as did the listeners with a single CI or a single NH ear. Listeners with (1) bilateral CIs, (2) hearing preservation CIs, (3) single-sided deaf CIs and (4) both bilateral CIs and bilateral hearing preservation, all showed rms error scores within a similar range (mean scores between 20 and 30 degrees of error). Conclusion: Modern CIs do not restore a normal level of sound source localization for CI listeners with access to sound information from two ears.


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Kamaldeep Sadh ◽  
Urvakhsh M Mehta ◽  
Kesavan Muralidharan ◽  
N Shivashankar ◽  
Sanjeev Jain

Abstract We compared the experience of auditory hallucinations in persons with normal hearing (HN; N = 20) and persons with impaired hearing (HI; N = 20) while they were experiencing psychoses. We assessed this experience across 42 domains and observed that, irrespective of hearing status, patients most often heard voices mainly in the language that they had learnt first (χ2 = 5.584; P = .018). However, a few experienced hallucinations in languages they “did not know” (3/20; 15%). The voices were most often attributed to both males and females (35/40; 87.5%). Those with hearing impairment heard voices closer to their ears, a hubbub of voices of crowds talking to them, and voices “as if” stuck or repetitive, often in the hearing-impaired ear. The hearing-impaired subjects also reported hearing nonverbal auditory hallucinations more frequently (χ2 = 17.625; P = .001), and their voices lacked emotional salience (χ2 = 4.055; P = .044). In contrast, the hallucinations were experienced in elaborate detail by the HN group (20/20), while those with HI often heard only simple sentences (14/20; P = .05). Unlike in the HN group, the intensity of the hallucinatory voices in the HI group remained the same on closing the affected ear or both ears. Interestingly, the use of hearing aids attenuated the intensity of the hallucinations (6/7; 85%) in those with HI.


Author(s):  
Amin Fatima Choudhry ◽  
Hafiza Shabnum Noor ◽  
Rabia Shahid ◽  
Tehreem Mukhtar ◽  
Syeda Mariam Zahra ◽  
...  

Aims: This study aims to assess the academic performance of children with hearing impairment who received early intervention in Lahore. Study Design: A cross-sectional survey design was used. Place and Duration of Study: Data were collected from a special institute/school, Hamza Foundation Academy Lahore, Pakistan, over six months from March 2021 to September 2021. Methodology: 97 children aged 4 to 12 with moderate to severe sensorineural hearing loss, using hearing aids (HAs) or cochlear implants (CIs), were included through a purposive sampling technique. Children with hearing loss other than sensorineural, and children who did not receive early intervention (hearing aids/implants or speech therapy), were excluded from this study. Results: The 97 children with hearing impairment achieved significantly higher test scores (80 to 99%) in English, Science, and Mathematics than in Urdu and Islamiyat (70 to 79%) after the implementation of intervention strategies. Conclusion: The study concludes that, while children with hearing impairment struggled in some areas of academics involving listening and imitation in subjects like Urdu (word structure) and Islamiyat (due to Arabic talafuz, i.e., pronunciation), their academic performance in Math, English, and Science was higher, with overall achieved percentages between 80 and 99%.


2017 ◽  
Vol 18 (35) ◽  
pp. 31-37
Author(s):  
Callum Carroll

Hearing impairment and Latin are not usually something that we pair together, yet Latin could be seen as an appealing GCSE to a hearing-impaired student. On the one hand, learning Latin as an additional language may be seen as difficult when the student is still trying to learn and develop their first language. On the other hand, learning Latin does not have, nor require, any formal oral or aural assessment, unlike most Modern Foreign Languages (MFL). My school is an 11-16 mixed comprehensive, where all pupils begin Latin in Year 7 on timetable, with the choice to continue in Year 8, from where it is taught as an off-timetable subject through to GCSE. Here I was assigned to teach a Year 10 boy who has congenital severe sensorineural hearing loss in both ears and wears two Nathos UP digital hearing aids during all waking hours. I was immediately interested in his reasons for choosing to study Latin, especially as it is an elective GCSE timetabled before school and during lunchtime. Prior to this encounter, I had recently attended a session on British Sign Language and the experience of Deaf and hearing-impaired students in school. Hearing about their experiences led me to think about the benefits of learning Latin, which consequently led me to investigate Kim's experience (name changed for confidentiality). I began focusing my observations on him and informally questioning his teachers; I was met by an array of praise, hearing that he proudly saw his hearing impairment as part of his identity rather than an obstacle. I started searching for information on teaching Latin to hearing-impaired pupils, but there was almost nothing specifically related to Latin. As a result, I saw this as my opportunity to collate my experiences and those of the pupil and teacher, to develop my research into something that may benefit other Latin teachers of hearing-impaired pupils.

