filtered speech
Recently Published Documents

TOTAL DOCUMENTS: 89 (five years: 4)
H-INDEX: 16 (five years: 0)

2021 ◽  
Author(s):  
Megan Humphrey

The present study used a Signal Detection approach to the study of prosody perception in children and adults who self-reported high levels of anxiety. Seventy-one children aged eight and nine years, and 85 adults, listened to filtered speech and were required to discriminate angry, fearful and happy tones of voice. Anxiety levels were not associated with perception of affective prosody in adults. Levels of anxiety were related to children's criterion but not sensitivity to prosody. Highly anxious children were significantly more liberal in reporting fearful prosody compared to low anxious children. Analyses of total responses suggest that this criterion reflects an interpretation bias as opposed to a response bias. Given that the interpretation bias was observed in children and not adults, it is possible that the bias may mark a vulnerability to develop further anxiety. This is consistent with previous experimental findings in other modalities, as well as with integrative models of anxiety development that identify such cognitive biases as predisposing factors. Furthermore, regardless of anxiety level, children were comparable to adults in their accuracy for fearful prosody, yet were significantly poorer than adults in their accuracy for angry and happy prosody. This suggests that fear may be one of the first emotions children learn to identify.
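The criterion and sensitivity measures above are standard signal detection theory quantities. As an illustrative sketch only (not the authors' analysis code, and with hypothetical hit/false-alarm rates), sensitivity d' and criterion c are computed from z-transformed rates:

```python
# Standard signal-detection computation of d' and criterion c.
# The example hit/false-alarm rates are hypothetical, for illustration only.
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Return (d', c). A more negative c means a more liberal
    criterion, i.e., a greater tendency to report the signal."""
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    c = -0.5 * (z_hit + z_fa)
    return d_prime, c

# Hypothetical: a high-anxiety child reporting "fearful" more liberally
# (more false alarms) versus a low-anxiety child with a stricter criterion.
d_hi, c_hi = dprime_and_criterion(0.80, 0.30)
d_lo, c_lo = dprime_and_criterion(0.75, 0.15)
```

In this toy comparison the two listeners have similar sensitivity, but the liberal listener's criterion c is lower (more negative), which is the pattern the abstract describes for highly anxious children.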




Author(s):  
Dmitry I. Zabolotny ◽  
Irina A. Belyakova ◽  
Viktor I. Lutsenko ◽  
Tetiana Yu. Kholodenko ◽  
Tetyana P. Loza ◽  
...  

Topicality: Long-term, pronounced psychoemotional tension leads to negative changes in the human body, yet many aspects of the cochleovestibular changes caused by psychoemotional stress remain insufficiently studied to date. Aim: To increase the diagnostic efficiency for auditory and vestibular disorders in patients of active working age after exposure to stress. Materials and methods: 95 patients of active working age with dizziness that manifested under stress, and 10 persons in a control group, were studied. All patients underwent the following tests: a survey with the "Comprehensive stress assessment" questionnaire, pure-tone and speech audiometry, filtered speech discrimination tests, assessment of the auditory adaptation level, impedancemetry, registration of auditory brainstem responses (ABR), computed static posturography, and vestibular testing. Results and discussion: Based on the questionnaire results, subjects were divided into three groups: Group 1 included 21 patients with moderate stress, Group 2 included 35 persons with severe stress that could not be compensated, and Group 3 included 39 persons under severe stress, 10 of whom were on the verge of exhausting their adaptive capacities. Sixty (63.2 %) patients had normal hearing; 24 (25.2 %) had statistically significant (P < 0.05) hearing impairment in the high-frequency zone compared with the control group, and 11 (11.6 %) showed a statistically significant difference across the entire frequency range. Central auditory processing disorders were detected in more than half of the patients (from 29 (30.5 %) to 49 (51.6 %) persons, depending on the test). Central vestibular syndrome of varying severity was diagnosed in all 95 patients. The most pronounced balance disorders on posturography were recorded in patients with severe stress under visual deprivation, in the eyes-closed position. Conclusions: An integrated approach made it possible to identify and select, alongside traditional research methods, supplementary diagnostic measures for optimal assessment of cochleovestibular changes in patients of active working age after exposure to stress, to detect cochleovestibular disorders, and to differentiate central from peripheral lesions of the auditory and vestibular analyzers. These measures included psychological testing, a test battery for central auditory processing disorders (auditory adaptation under load, filtered speech discrimination tests, registration of ABR), registration of postural balance using the Wii Balance Board platform, and vestibular testing.


2021 ◽  
Vol 3 (2) ◽  
pp. 1-20
Author(s):  
Xinchun Wang

Mandarin speakers' productions of English sentences, spontaneous speech, and filtered speech were rated for degree of foreign accent by native English and native Mandarin listeners. Results showed that Mandarin speakers with 12 years' length of residence (LOR) in the U.S. were rated as accented as those with zero LOR. Untrained native Mandarin listeners with no LOR in the target-language environment were comparable to native English listeners in gauging degree of foreign accent from sentences and spontaneous speech. No stimulus effect was found between sentences and spontaneous speech for accent rating. Filtered natural speech appeared to attenuate the perceived degree of foreign accent, and Mandarin listeners were unable to assess foreign accent from long excerpts of filtered speech. The findings suggest that LOR is not an important predictor of degree of foreign accent for adult speakers with a late age of arrival (AOA).
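"Filtered speech" stimuli of this kind are typically produced by low-pass filtering, which removes most segmental (phonetic) detail while preserving prosodic cues such as pitch and rhythm. The sketch below is illustrative only; the 400 Hz cutoff and filter design are assumptions of this example, not parameters reported in the study:

```python
# Illustrative low-pass filtering of a signal, as commonly used to create
# content-filtered speech stimuli. Cutoff and filter order are assumed here.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_speech(signal, fs, cutoff_hz=400.0, order=6):
    """Zero-phase low-pass filter: keeps prosodic (low-frequency)
    energy, strips most segmental (high-frequency) detail."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

# Hypothetical demo: a 200 Hz "voicing" component survives,
# while a 3 kHz component is removed.
fs = 16000
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 3000 * t)
out = lowpass_speech(mix, fs)
```

Zero-phase filtering (`sosfiltfilt`) is a common choice for stimulus preparation because it avoids introducing phase distortion into the prosodic contour.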


2020 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Saradha Ananthakrishnan ◽  
Laura Grinstead ◽  
Danielle Yurjevich

2020 ◽  
Vol 30 (Supplement_2) ◽  
Author(s):  
F Maia ◽  
V Jesus ◽  
C Mateus ◽  
S Paulo ◽  
L Marcelino ◽  
...  

Abstract Introduction One of the main hearing complaints is difficulty perceiving speech in noisy environments. This complaint worsens with ageing, as cognitive processing speed slows, and/or when hearing loss is present. Auditory training can improve speech perception in adverse environments, and the use of auditory training software on various digital platforms is becoming a reality. Objectives To validate an auditory training app in European Portuguese with individuals aged between 55 and 64 years old with average hearing thresholds of approximately 20 dB. Methodology The sample consists of two groups of seven individuals without cognitive problems. One group performed eight auditory training sessions with the application over a period of four weeks; the other (control) group performed no auditory training sessions. All individuals were evaluated with the filtered speech test. The training group was evaluated before, immediately after, and four weeks after the auditory training sessions; the control group was evaluated after four weeks. Results There was a statistically significant difference on the filtered speech test between performance before and immediately after the auditory training (p = 0.018). Four weeks after the end of the training, each individual's performance was unchanged. In the control group there were no significant differences between the two evaluation moments. Conclusion The EVOLLU auditory training app promotes an improvement in speech perception in adverse environments that persists over time, a sign that individuals are applying that learning in their day-to-day lives. The Evollu application can and should be used in the auditory training of individuals aged 55 to 64 years old.
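With a training group of only seven individuals, a paired before/after comparison such as the reported p = 0.018 is typically obtained with a nonparametric test like the Wilcoxon signed-rank test (the abstract does not name the test, so this is an assumption). The scores below are hypothetical, purely to illustrate the procedure:

```python
# Hypothetical paired comparison for a small training group (n = 7),
# sketched with a Wilcoxon signed-rank test. Scores are invented;
# the study reports only the p-value, not the raw data or the test used.
from scipy.stats import wilcoxon

before = [60, 64, 58, 62, 55, 66, 61]   # % correct, filtered speech test
after  = [65, 70, 65, 70, 64, 76, 72]   # immediately after training

stat, p = wilcoxon(before, after)
# Every participant improved, so the smaller rank sum is 0 and the
# two-sided p-value falls below the usual 0.05 threshold.
```

With so few participants, an exact nonparametric test is the safer choice than a t-test, since normality of the score differences cannot be meaningfully checked.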


2020 ◽  
Vol 24 ◽  
pp. 233121652097563
Author(s):  
Christopher F. Hauth ◽  
Simon C. Berning ◽  
Birger Kollmeier ◽  
Thomas Brand

The equalization cancellation (EC) model is often used to predict the binaural masking level difference. Previously, its application to speech in noise has required separate knowledge of the speech and noise signals in order to maximize the signal-to-noise ratio (SNR). Here, a novel, blind equalization cancellation model is introduced that can operate on the mixed signals and requires no assumptions about particular sound source directions. It uses different strategies for positive and negative SNRs, with the switching between the two steered by a blind decision stage that exploits modulation cues. The output of the model is a single-channel signal with enhanced SNR, which we analyzed using the speech intelligibility index to compare speech intelligibility predictions. In a first experiment, the model was tested on experimental data obtained in a scenario with spatially separated target and masker signals. Predicted speech recognition thresholds were in good agreement with measured speech recognition thresholds, with a root-mean-square error of less than 1 dB. A second experiment investigated signals at positive SNRs, achieved using time-compressed and low-pass filtered speech. The results demonstrated that binaural unmasking of speech occurs at positive SNRs and that the modulation-based switching strategy can predict the experimental results.
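The core EC idea can be shown with a deliberately non-blind toy example: for a diotic masker (identical at both ears, N0) and an antiphasic target (inverted at one ear, S&#960;), subtracting the equalized ear signals cancels the masker and doubles the target. This sketch is not the paper's blind model (which must estimate the equalization from the mixtures); all parameters here are assumptions of the example:

```python
# Toy equalization-cancellation (EC) demonstration in an N0-Spi setup.
# Not the paper's blind model: the interaural configuration is known here.
import numpy as np

fs = 16000
n = fs                                  # 1 second of signal
rng = np.random.default_rng(0)

masker = rng.standard_normal(n)         # diotic masker: same at both ears
target = 0.3 * np.sin(2 * np.pi * 500 * np.arange(n) / fs)  # antiphasic target

left = masker + target
right = masker - target                 # target inverted at the right ear

# EC step: the ears are already "equalized" for the masker, so
# cancellation (subtraction) removes it entirely and doubles the target.
ec_out = left - right                   # equals 2 * target; masker is gone

# Input SNR at one ear, before cancellation (strongly negative here).
snr_in_db = 10 * np.log10(np.mean(target**2) / np.mean(masker**2))
```

In realistic use, the equalization stage must first estimate interaural gain and delay before cancellation, and internal noise limits how completely the masker can be removed; the toy above only isolates the cancellation step that produces the binaural unmasking benefit.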


2019 ◽  
Vol 40 (1) ◽  
pp. 3-17 ◽  
Author(s):  
Carina Pals ◽  
Anastasios Sarampalis ◽  
Mart van Dijk ◽  
Deniz Başkent
