Travel safety issues related to digital hearing aids—Assessment of the effect of compression in sound localization and distance evaluation tasks by blindfolded hearing impaired adults

2005 ◽  
Vol 1282 ◽  
pp. 278-282 ◽  
Author(s):  
Agathe Ratelle ◽  
Julie Dufour ◽  
Tony Leroux ◽  
Chantal Laroche ◽  
Christian Giguère ◽  
...  
2017 ◽  
Vol 36 (3) ◽  
pp. 910-916
Author(s):  
DBN Nnadi ◽  
NC Onu ◽  
SE Oti ◽  
CU Ogbuefi

This research paper presents a microcontroller-based binaural digital hearing aid for hearing-impaired people, using an ATmega328 microcontroller and supporting circuitry to process the audio input signal: increasing or reducing the gain of the input signal, filtering background noise, compressing frequencies, saving battery power, and minimizing circuitry by using the microcontroller's internal ADC and two of its PWM pins as a DAC. Hearing impairment among youths and adults is on the increase, due in part to improper use of phones: at every minute of the day, someone's earphones are playing one type of music or another. This research work was conceived to address that problem. The different stages of the digital hearing aid were designed and first simulated in Proteus software, then implemented on a PCB. The main components of the system were the audio input unit, consisting of the microphone and its pre-amplifier; the microcontroller (ATmega328), providing the ADC, the DAC, and the audio signal processing; the filter stage and control code (frequency compression, power saving, acoustic feedback control, signal level control, adaptive adjustment, etc.); the power amplifier and volume control unit; and the earphones (output). The control code was written in C, and the Arduino Uno toolchain was used to program the ATmega328. The prototype has an overall system gain of 27 dB and a power output of 32.5 mW. The prototype was tested with a patient with a hearing impairment, who was satisfied with the device. http://dx.doi.org/10.4314/njt.v36i3.34
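The ADC-to-PWM signal chain described in the abstract can be sketched in software. The following is a minimal Python simulation, assuming a 10-bit ADC (0..1023) and an 8-bit PWM duty cycle (0..255) as on the ATmega328; the gain, threshold, and compression-ratio values are illustrative assumptions, not taken from the prototype:

```python
def process_sample(adc_value, gain=2.0, threshold=0.5, ratio=4.0):
    """Process one 10-bit ADC sample into an 8-bit PWM duty value.

    Illustrative stand-in for the gain + compression stage; the
    parameter values are assumptions, not those of the prototype.
    """
    # Centre the 10-bit reading (0..1023) around zero, normalise to [-1, 1).
    x = (adc_value - 512) / 512.0
    # Linear gain stage.
    y = x * gain
    # Simple static compression above the threshold.
    if abs(y) > threshold:
        y = (threshold + (abs(y) - threshold) / ratio) * (1 if y > 0 else -1)
    # Clip and map back to an 8-bit PWM duty cycle (0..255).
    y = max(-1.0, min(1.0, y))
    return int(round((y + 1.0) * 127.5))

print(process_sample(512))   # mid-scale (silence) maps to mid duty
print(process_sample(600))   # moderate input, below the knee
print(process_sample(1023))  # full-scale input, compressed
```

On real firmware the same arithmetic would run per sample inside the ADC interrupt, with the result written to a PWM compare register.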


1994 ◽  
Vol 73 (3) ◽  
pp. 176-179 ◽  
Author(s):  
Barry P. Kimberley ◽  
Rob Dymond ◽  
Abram Gamer

The rehabilitation of binaural hearing performance in hearing-impaired listeners has received relatively little attention to date. Both localization ability and speech understanding in noise are affected in the impaired listener. When localization performance is tested in impaired ears with conventional hearing aid fittings, it is found to be worse than in the unaided condition. Advances in electronic design now permit speculation about the implementation of complex digital filters within the confines of an in-the-ear hearing aid. We have begun exploring strategies to enhance the localization performance of impaired listeners with bilateral digital signal processing. We are examining three strategies in bilateral hearing aid design to improve localization performance in hearing-impaired listeners: 1) more accurate fitting of individual ear losses, 2) equalization of the effect of the hearing aid itself on the acoustics within the ear canal, and 3) binaural fitting strategies which in effect modify individual ear fittings to enhance localization performance. The results of early psychophysical testing suggest that localization performance can be improved with these strategies.


2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740059 ◽  
Author(s):  
Ruiyu Liang ◽  
Ji Xi ◽  
Yongqiang Bao

To improve on gain compensation based on a three-segment sound pressure level (SPL) scale in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL scale was proposed. First, a uniform cosine modulated filter bank was designed. Adjacent channels with low or gradual slopes were then adaptively merged, according to the audiogram of the hearing-impaired person, to obtain a corresponding non-uniform cosine modulated filter bank. Second, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed. Meanwhile, the audible range from 0 dB SPL to 120 dB SPL was divided equally into eight segments. Based on these segments, a refined prescription formula was designed to compute a more detailed compensation gain according to the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aids speech perception index (HASPI) and hearing aids speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed that the proposed algorithm can effectively improve speech recognition for six hearing-impaired persons.
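The eight-segment idea divides the 0-120 dB SPL range into eight equal 15-dB segments and assigns each a gain. The sketch below illustrates that mapping in Python; the taper from half-gain at low levels to no gain near 120 dB SPL is a hypothetical stand-in, since the abstract does not reproduce its actual prescription formula:

```python
def segment_index(spl):
    """Map an SPL in 0-120 dB to one of eight 15-dB-wide segments (0..7)."""
    return min(int(spl // 15), 7)

def segment_gain(spl, hearing_loss_db):
    """Hypothetical per-segment prescription: gain tapers linearly from
    half the hearing loss at the lowest segment down to zero at the
    highest. The paper's actual formula is not reproduced here."""
    seg = segment_index(spl)
    frac = (7 - seg) / 7.0          # fraction of half-gain applied
    return 0.5 * hearing_loss_db * frac

print(segment_index(0), segment_index(119), segment_index(120))
print(segment_gain(30, 60))    # low-level input: close to half-gain
print(segment_gain(110, 60))   # high-level input: no added gain
```

In the full method this per-segment gain would be computed independently in each (possibly merged) filter-bank channel from that channel's measured sub-band SPL.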


2017 ◽  
Vol 18 (35) ◽  
pp. 31-37
Author(s):  
Callum Carroll

Hearing impairment and Latin are not usually something that we pair together, yet it could be seen as an appealing GCSE to a hearing-impaired student. On the one hand learning Latin as an additional language may be seen as difficult when the student is still trying to learn and develop their first language. On the other hand learning Latin does not have, nor require, any formal oral or aural assessment like most Modern Foreign Languages (MFL). My school is an 11-16 mixed comprehensive, where all pupils begin Latin in year 7 on timetable with the choice to continue in Year 8, from where it is taught as an off-timetable subject through to GCSE. Here I was introduced to teach a Year 10 boy who has congenital severe sensory-hearing loss in both ears and he wears two Nathos UP digital hearing aids all waking hours. I was immediately interested in his reasons for choosing to study Latin, especially as it is an elective GCSE timetabled before school and during lunch time. Prior to this encounter, I had recently attended a session on British Sign Language and the experience of Deaf and hearing-impaired students in school. Hearing about their experiences led me to think about the benefits of learning Latin, which consequently led me to investigate Kim's experience (name changed for confidentiality). I began focusing my observations on him and informally questioning his teachers; I was met by an array of praise stating that he proudly saw his hearing impairment as part of his identity rather than an obstacle. I started searching for information on teaching Latin to hearing-impaired pupils, but there was almost nothing specifically related to Latin. As a result, I saw this as my opportunity to collate my experiences and the experiences of the pupil and teacher to develop my research into something that may benefit other Latin teachers of hearing-impaired pupils.


2021 ◽  
Author(s):  
Jiming Yang

Hearing-impaired listeners often have great difficulty understanding speech against a noisy background. This problem has motivated the development of a new speech enhancement scheme aimed at improving speech-in-noise perception for hearing-impaired listeners. In this thesis, a novel wavelet-packet-based noise reduction algorithm and hearing loss compensation are presented for a single-microphone hearing aid application. The noise reduction scheme uses a suppression rule based on the noise masking threshold to remove additive noise. The perceptual noise suppression rule is optimized to achieve a balance between noise removal and speech distortion. Both objective and subjective evaluations have shown the superior performance of the proposed technique, combining low residual noise with low signal distortion. The hearing loss compensation is realized by wavelet-based loudness compression in each critical band. The compensated speech is guaranteed to be above the hearing-impaired listener's threshold of hearing, with the growth of loudness corrected within the dynamic range. Preference tests among normal-hearing listeners with simulated hearing loss have shown that the compensated speech is favored in various conditions.
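The analyse-threshold-synthesise pattern underlying such wavelet-based noise reduction can be illustrated with a single-level Haar decomposition. A fixed threshold below stands in for the perceptual masking threshold (the thesis derives that threshold per wavelet packet node; the fixed value here is purely an assumption for illustration):

```python
import math

def haar_step(x):
    """One level of Haar analysis: approximation and detail coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def soft_threshold(coeffs, thr):
    """Zero coefficients below the threshold; shrink the rest toward zero."""
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

def haar_inverse(a, d):
    """Perfect-reconstruction Haar synthesis from (a, d)."""
    out = []
    for ai, di in zip(a, d):
        out.append((ai + di) / math.sqrt(2))
        out.append((ai - di) / math.sqrt(2))
    return out

# Tone-like signal plus small perturbations; small detail coefficients
# (the "noise") fall below the threshold and are suppressed.
x = [1.0, 1.1, -0.9, -1.0, 1.05, 0.95, -1.1, -1.0]
a, d = haar_step(x)
denoised = haar_inverse(a, soft_threshold(d, 0.2))
```

A full wavelet packet scheme recurses `haar_step` on both the approximation and detail branches and applies a per-node threshold, but the three-step structure is the same.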


2012 ◽  
Vol 29 (1-2) ◽  
Author(s):  
Shaista Majid

Hearing aids have been used successfully and efficiently for many decades in the rehabilitation of hearing-impaired children. In the present era, advances in technology have brought a variety of hearing aids that enable hearing-impaired children to use their residual hearing efficiently for speech and language learning. Two types of hearing aids are currently available according to amplification circuitry: analog and digital. The present study aimed to compare the articulation of children using digital hearing aids (DHA) with that of users of analog, non-digital hearing aids (AHA). A sample of thirty children with hearing impairment, fifteen DHA users and fifteen AHA users, aged 8 to 13 years, was selected by purposive sampling to participate in the study. The Picture Articulation Test with a subjective assessment technique was used to assess the articulation of the children from speech samples taken in response to picture stimuli. The results showed that both groups of children, DHA and AHA users, demonstrated articulation errors. Intelligibility was significantly better in children using DHA than in AHA users. Children using AHA presented significantly more phonetic and phonological errors, but no significant difference in articulation was found between male and female children, between children with monaural and binaural hearing aid fittings, or among children with different amplification periods. A detailed analysis of articulation with a larger sample of children using both types of hearing aids, with more consideration of external and internal variables, is recommended in future to further clarify the issue.


1990 ◽  
Vol 21 (3) ◽  
pp. 147-150
Author(s):  
Ronald A. Wilde

A commercial noise dose meter was used to estimate the equivalent noise dose received through high-gain hearing aids worn in a school for deaf children. There were no significant differences among nominal SSPL settings and all SSPL settings produced very high equivalent noise doses, although these are within the parameters of previous projections.
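An equivalent noise dose relates a measured exposure level and duration to a permissible daily limit. A sketch of the calculation, using an OSHA-style 90 dB criterion with a 5-dB exchange rate as an assumption, since the abstract does not state which dose standard the meter followed:

```python
def permissible_hours(level_db, criterion=90.0, exchange=5.0):
    """Permissible daily exposure time (hours) at a steady level.
    Criterion level and exchange rate follow the OSHA convention;
    other standards use e.g. 85 dB with a 3-dB exchange rate."""
    return 8.0 / (2.0 ** ((level_db - criterion) / exchange))

def noise_dose(level_db, hours, **kw):
    """Equivalent noise dose (%) for a steady exposure: 100% means the
    full permissible daily exposure has been reached."""
    return 100.0 * hours / permissible_hours(level_db, **kw)

print(noise_dose(90.0, 8.0))   # 100.0: exactly the permissible dose
print(noise_dose(95.0, 8.0))   # 200.0: twice the permissible dose
```

High-gain aids worn all day at school can plausibly push the at-ear level well above the criterion, which is why the measured equivalent doses in the study were very high.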


1988 ◽  
Vol 31 (2) ◽  
pp. 156-165 ◽  
Author(s):  
P. A. Busby ◽  
Y. C. Tong ◽  
G. M. Clark

The identification of consonants in /a/-C-/a/ nonsense syllables, using a fourteen-alternative forced-choice procedure, was examined in 4 profoundly hearing-impaired children under five conditions: audition alone using hearing aids in free field (A), vision alone (V), auditory-visual using hearing aids in free field (AV1), auditory-visual with linear amplification (AV2), and auditory-visual with syllabic compression (AV3). In the AV2 and AV3 conditions, acoustic signals were binaurally presented by magnetic or acoustic coupling to the subjects' hearing aids. The syllabic compressor had a compression ratio of 10:1, and attack and release times of 1.2 ms and 60 ms. The confusion matrices were subjected to two analysis methods: hierarchical clustering and information transmission analysis using articulatory features. The same general conclusions were drawn from the results of either analysis method. The results indicated better performance in the V condition than in the A condition. In the three AV conditions, the subjects predominantly combined the acoustic parameter of voicing with the visual signal. No consistent differences were recorded across the three AV conditions; syllabic compression did not, therefore, appear to have a significant influence on AV perception for these children. A high degree of subject variability was recorded for the A and the three AV conditions, but not for the V condition.
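A syllabic compressor of the kind described (10:1 ratio, 1.2 ms attack, 60 ms release) can be sketched as an envelope follower driving a gain computer. The ratio and time constants below come from the abstract; the threshold and sample rate are assumptions for illustration:

```python
import math

def compress(samples, fs=16000, ratio=10.0, threshold=0.1,
             attack_ms=1.2, release_ms=60.0):
    """Envelope-follower syllabic compressor. Ratio and time constants
    match the abstract; threshold and fs are illustrative assumptions."""
    # One-pole smoothing coefficients for rising (attack) and
    # falling (release) envelopes.
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        mag = abs(x)
        coeff = a_att if mag > env else a_rel
        env = coeff * env + (1.0 - coeff) * mag
        if env > threshold:
            # 10:1 static curve above the threshold.
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out.append(x * gain)
    return out
```

With these constants a sustained loud syllable is pulled down within a couple of milliseconds, while the slow release avoids pumping between syllables, which is the intended behaviour of syllabic compression.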

