Prediction of speech recognition in background noise and competing speech from suprathreshold auditory and cognitive measures

2021 ◽  
Vol 150 (4) ◽  
pp. A305-A305
Author(s):  
Jonathan H. Venezia ◽  
Nicole Whittle ◽  
Christian Herrera Ortiz ◽  
Marjorie R. Leek ◽  
Caleb Barcenas ◽  
...  
2021 ◽  
Vol 150 (4) ◽  
pp. A276-A276
Author(s):  
Nicole Whittle ◽  
Christian Herrera Ortiz ◽  
Marjorie R. Leek ◽  
Jerome Heidrich ◽  
Mark Jenkins ◽  
...  

2008 ◽  
Vol 18 (1) ◽  
pp. 19-24
Author(s):  
Erin C. Schafer

Children who use cochlear implants experience significant difficulty hearing speech in the presence of background noise, such as in the classroom. To address these difficulties, audiologists often recommend frequency-modulated (FM) systems for children with cochlear implants. The purpose of this article is to examine current empirical research on FM systems and cochlear implants. Discussion topics include selecting the optimal type of FM receiver, the benefits of binaural FM-system input, the importance of direct audio input (DAI) receiver-gain settings, and the effects of speech-processor programming on speech recognition. FM systems significantly improve the signal-to-noise ratio at the child's ear through three types of FM receivers: mounted speakers, desktop speakers, and DAI. This discussion will aid audiologists in making evidence-based recommendations for children using cochlear implants and FM systems.


Author(s):  
Lery Sakti Ramba

The purpose of this research is to design a home automation system that can be controlled by voice commands. The research was conducted by studying related work, discussing with competent parties, designing and testing the system, and analyzing the test results. The voice recognition system was designed using deep learning convolutional neural networks (DL-CNN). The CNN model was then trained to recognize several kinds of voice commands. The result of this research is a speech recognition system that can be used to control several electronic devices connected to the system. The system achieved a 100% success rate in a room with a background noise intensity of 24 dB (silent), 67.67% at a background noise intensity of 42 dB, and only 51.67% at 52 dB (noisy). The success rate of the speech recognition system is thus strongly influenced by the intensity of background noise in the room; to obtain optimal results, the system is better suited to rooms with low-intensity background noise.
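The abstract does not give the network architecture, so the following NumPy-only forward pass is a minimal sketch of the idea behind a DL-CNN command classifier: a convolution over a (hypothetical) log-mel spectrogram, ReLU, pooling, and a softmax over an assumed set of five command classes. All shapes, filter counts, and the class count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    """Naive 'valid' 2-D convolution (cross-correlation, as CNNs use)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical input: 40 mel bands x 100 frames (sizes assumed).
spec = rng.standard_normal((40, 100))

kernel = rng.standard_normal((3, 3)) * 0.1            # one conv filter
feat = np.maximum(conv2d_valid(spec, kernel), 0.0)    # ReLU activation
pooled = feat.reshape(19, 2, 49, 2).mean(axis=(1, 3)) # 2x2 average pool
W = rng.standard_normal((5, pooled.size)) * 0.01      # 5 assumed classes
probs = softmax(W @ pooled.ravel())                   # class probabilities
print(probs.shape)
```

In a real system the kernel and classifier weights would be learned from labeled command recordings rather than drawn at random; the sketch only shows the shape of the computation.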


2018 ◽  
Vol 144 (5) ◽  
pp. EL417-EL422 ◽  
Author(s):  
Hartmut Meister ◽  
Sebastian Rählmann ◽  
Martin Walger

Author(s):  
Poonam Bansal ◽  
Amita Dev ◽  
Shail Jain

In this paper, a feature extraction method that is robust to additive background noise is proposed for automatic speech recognition. Since background noise corrupts the autocorrelation coefficients of the speech signal mostly at the lower orders, while the higher-order autocorrelation coefficients are least affected, this method discards the lower-order autocorrelation coefficients and uses only the higher-order autocorrelation coefficients for spectral estimation. The magnitude spectrum of the windowed higher-order autocorrelation sequence is used as an estimate of the power spectrum of the speech signal. This power spectral estimate is processed further by the Mel filter bank, a log operation, and the discrete cosine transform to obtain the cepstral coefficients. These cepstral coefficients are referred to as the Differentiated Relative Higher Order Autocorrelation Coefficient Sequence Spectrum (DRHOASS). The authors evaluate the speech recognition performance of the DRHOASS features and show that they perform as well as mel-frequency cepstral coefficient (MFCC) features for clean speech, and that their recognition performance is better than that of MFCC features for noisy speech.
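The described pipeline can be sketched as follows. This is a loose NumPy illustration, not the authors' implementation: every parameter value (frame length, number of discarded lags, FFT size, filter count, cepstral order) is an assumption, and the "differentiation" step implied by the DRHOASS name is omitted because it is not described in this summary.

```python
import numpy as np

def autocorr(x):
    """Biased autocorrelation of a frame, computed via FFT."""
    n = len(x)
    X = np.fft.rfft(x, 2 * n)
    return np.fft.irfft(np.abs(X) ** 2)[:n] / n

def mel_filterbank(n_filt, n_fft, sr):
    """Minimal triangular mel filter bank (sketch)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = imel(np.linspace(mel(0.0), mel(sr / 2.0), n_filt + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(1, n_filt + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for j in range(l, c):
            fb[i - 1, j] = (j - l) / max(c - l, 1)
        for j in range(c, r):
            fb[i - 1, j] = (r - j) / max(r - c, 1)
    return fb

def dct2(x, n_out):
    """First n_out coefficients of a type-II DCT."""
    N = len(x)
    k = np.arange(n_out)[:, None]
    n = np.arange(N)[None, :]
    return np.cos(np.pi * k * (2 * n + 1) / (2 * N)) @ x

def higher_order_ac_cepstrum(frame, sr=8000, drop=10,
                             n_fft=512, n_mel=20, n_cep=13):
    r = autocorr(frame)
    hi = r[drop:]                         # discard lower-order lags
    hi = hi * np.hamming(len(hi))         # window the retained sequence
    spec = np.abs(np.fft.rfft(hi, n_fft)) # magnitude spectrum estimate
    logmel = np.log(mel_filterbank(n_mel, n_fft, sr) @ spec + 1e-10)
    return dct2(logmel, n_cep)            # cepstral coefficients

frame = np.sin(2 * np.pi * 300 * np.arange(256) / 8000)  # toy 300 Hz frame
ceps = higher_order_ac_cepstrum(frame)
print(ceps.shape)
```

The key noise-robustness idea is the `r[drop:]` step: additive noise concentrates its corruption in the low-order lags of the autocorrelation, so spectral estimation from the remaining higher-order lags is less affected.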


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Yang Wenyi Liu ◽  
Bing Wang ◽  
Bing Chen ◽  
John J. Galvin ◽  
Qian-Jie Fu

Many tinnitus patients report difficulties understanding speech in noise or competing talkers, despite having “normal” hearing in terms of audiometric thresholds. The interference caused by tinnitus is more likely central in origin. Release from informational masking (more central in origin) produced by competing speech may further illuminate central interference due to tinnitus. In the present study, masked speech understanding was measured in normal hearing listeners with or without tinnitus. Speech recognition thresholds were measured for target speech in the presence of multi-talker babble or competing speech. For competing speech, speech recognition thresholds were measured for different cue conditions (i.e., with and without target-masker sex differences and/or with and without spatial cues). The present data suggest that tinnitus negatively affected masked speech recognition even in individuals with no measurable hearing loss. Tinnitus severity appeared to especially limit listeners’ ability to segregate competing speech using talker sex differences. The data suggest that increased informational masking via lexical interference may tax tinnitus patients’ central auditory processing resources.
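The abstract does not detail the tracking procedure behind its speech recognition thresholds (SRTs). One common approach is an adaptive 1-up/1-down staircase, which converges on the SNR yielding 50%-correct recognition. The simulation below is a hypothetical sketch: the listener is modeled by an assumed logistic psychometric function, and all step sizes, starting SNR, and reversal counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_listener(snr_db, srt_true=-6.0, slope=1.0):
    """Hypothetical listener: logistic psychometric function of SNR (dB)."""
    p = 1.0 / (1.0 + np.exp(-slope * (snr_db - srt_true)))
    return rng.random() < p

def measure_srt(step_db=2.0, start_snr=10.0, max_reversals=8):
    """1-up/1-down staircase; tracks the 50%-correct SNR (the SRT)."""
    snr, direction = start_snr, -1
    reversals = []
    while len(reversals) < max_reversals:
        correct = simulated_listener(snr)
        new_dir = -1 if correct else +1   # easier after a miss, harder after a hit
        if new_dir != direction:          # direction change = reversal
            reversals.append(snr)
            direction = new_dir
        snr += new_dir * step_db
    return float(np.mean(reversals[2:])) # discard early reversals

srt_est = measure_srt()
print(round(srt_est, 1))
```

Comparing such SRT estimates across cue conditions (e.g., with versus without talker-sex or spatial cues) is how masking release is typically quantified: the difference in dB between the two thresholds.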

