Auditory Perception of Complex Sounds

1990 ◽  
Author(s):  
Ira J. Hirsh

2019 ◽  
Author(s):  
Steven Losorelli ◽  
Blair Kaneshiro ◽  
Gabriella A. Musacchia ◽  
Nikolas H. Blevins ◽  
Matthew B. Fitzgerald

Abstract: The ability to differentiate complex sounds is essential for communication. Here, we propose using a machine-learning approach, called classification, to objectively evaluate auditory perception. In this study, we recorded frequency following responses (FFRs) from 13 normal-hearing adult participants to six short music and speech stimuli sharing similar fundamental frequencies but varying in overall spectral and temporal characteristics. Each participant completed a perceptual identification test using the same stimuli. We used linear discriminant analysis to classify FFRs. Results showed statistically significant FFR classification accuracies using both the full response epoch in the time domain (72.3% accuracy, p < 0.001) and the real and imaginary Fourier coefficients up to 1 kHz (74.6%, p < 0.001). We classified decomposed versions of the responses in order to examine which response features contributed to successful decoding. Classifier accuracies using Fourier magnitude and phase alone in the same frequency range were lower but still significant (58.2% and 41.3%, respectively, p < 0.001). Classification of overlapping 20-msec subsets of the FFR in the time domain similarly produced reduced but significant accuracies (42.3%–62.8%, p < 0.001). Participants’ mean perceptual responses were most accurate (90.6%, p < 0.001). Confusion matrices from FFR classifications and perceptual responses were converted to distance matrices and visualized as dendrograms. FFR classifications and perceptual responses demonstrate similar patterns of confusion across the stimuli. Our results demonstrate that classification can differentiate auditory stimuli from FFRs with high accuracy. 
Moreover, the reduced accuracies obtained when the FFR is decomposed in the time and frequency domains suggest that different response features contribute complementary information, similar to how the human auditory system is thought to rely on both timing and frequency information to accurately process sound. Taken together, these results suggest that FFR classification is a promising approach for objective assessment of auditory perception.
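The classification pipeline described above — spectral features extracted from evoked responses, fed to a linear discriminant classifier — can be sketched in a few lines. This is a minimal illustration on synthetic FFR-like signals, not the authors' code: the sampling rate, epoch length, per-class trial count, and fundamental frequencies below are all assumed for demonstration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 2000          # sampling rate in Hz (assumed)
n_samples = 400    # 200-ms epoch (assumed)
f0s = [100, 120, 140, 160, 180, 200]  # illustrative fundamentals, one per class

# Simulate FFR-like responses: a noisy sinusoid at each class's fundamental
X, y = [], []
t = np.arange(n_samples) / fs
for label, f0 in enumerate(f0s):
    for _ in range(30):
        signal = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n_samples)
        # Feature vector: real and imaginary Fourier coefficients up to 1 kHz,
        # mirroring the frequency range used in the study
        spec = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(n_samples, 1 / fs)
        coeffs = spec[freqs <= 1000]
        X.append(np.concatenate([coeffs.real, coeffs.imag]))
        y.append(label)

X, y = np.array(X), np.array(y)
clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")  # well above chance (1/6)
```

On clean synthetic data the classifier separates the classes easily; real FFRs are far noisier, which is why the reported accuracies, while significant, sit well below ceiling.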


2019 ◽  
Author(s):  
Jonathan Melchor ◽  
Isaac Morán ◽  
Tonatiuh Figueroa ◽  
Luis Lemus

Abstract: The ability to invariably identify spoken words and other naturalistic sounds across different temporal modulations and timbres requires perceptual tolerance to numerous acoustic variations. However, the mechanisms by which auditory information is perceived to be invariant are poorly understood, and no study has explicitly tested the perceptual constancy skills of nonhuman primates. We investigated the ability of two trained rhesus monkeys to learn and then recognize multiple sounds that included multisyllabic words. Importantly, we tested their ability to group previously unheard sounds into the corresponding categories. We found that the monkeys adequately categorized sounds whose formants were at a close Euclidean distance to those of the learned sounds. Our results indicate that macaques can attend to and memorize complex sounds such as words. This ability had not been studied or reported before and can be used to study the neuronal mechanisms underlying auditory perception.
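The categorization rule described above — assign a novel sound to the learned category whose formant vector is nearest in Euclidean distance — can be sketched as follows. The prototype formant values and category names here are hypothetical placeholders, not values from the study.

```python
import numpy as np

# Hypothetical formant prototypes (F1, F2 in Hz) for two learned categories
learned = {
    "category_A": np.array([700.0, 1200.0]),
    "category_B": np.array([300.0, 2300.0]),
}

def categorize(formants, prototypes):
    """Assign a sound to the learned category whose prototype formant
    vector is nearest in Euclidean distance."""
    distances = {name: np.linalg.norm(formants - proto)
                 for name, proto in prototypes.items()}
    return min(distances, key=distances.get)

# A novel sound with formants close to category_A's prototype
novel = np.array([680.0, 1250.0])
print(categorize(novel, learned))  # → category_A
```

This nearest-prototype rule is the simplest reading of the result; the study itself measured behavioral generalization, not an explicit algorithm.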


1967 ◽  
Vol 10 (3) ◽  
pp. 438-448
Author(s):  
H. N. Wright

A binaural recording of traffic sounds that reached an artificial head oriented in five different positions was presented to five subjects, each of whom responded under four different criteria. The results showed that it is possible to examine the ability of listeners to localize sound while listening through earphones and that the criterion adopted by an individual listener is independent of his performance. For the experimental conditions used, the Type II ROC curve generated by manipulating criterion behavior was linear and consistent with a guessing model. Further experiments involving different degrees of stimulus degradation suggested a partial explanation for this finding and illustrated the various types of monaural and binaural cues used by normal and hearing-impaired listeners to localize complex sounds.


1973 ◽  
Vol 16 (3) ◽  
pp. 482-487 ◽  
Author(s):  
June D. Knafle

One hundred and eighty-nine kindergarten children were given a CVCC rhyming test which included four slightly different types of auditory differentiation. They scored higher on categories that provided maximum contrasts of final consonant sounds than on categories that provided lesser contrasts. For both sexes, significant differences were found between the categories; although the sex differences were not significant, girls made more correct rhyming responses than boys on the most difficult category.


Author(s):  
Rachel L. C. Mitchell ◽  
Rachel A. Kingston

It is now accepted that older adults have difficulty recognizing prosodic emotion cues, but it is not clear at what processing stage this ability breaks down. We manipulated the acoustic characteristics of tones in pitch, amplitude, and duration discrimination tasks to assess whether impaired basic auditory perception coexisted with our previously demonstrated age-related prosodic emotion perception impairment. It was found that pitch perception was particularly impaired in older adults, and that it displayed the strongest correlation with prosodic emotion discrimination. We conclude that an important cause of age-related impairment in prosodic emotion comprehension exists at the fundamental sensory level of processing.


1988 ◽  
Vol 33 (12) ◽  
pp. 1103-1103
Author(s):  
No authorship indicated

1991 ◽  
Vol 36 (10) ◽  
pp. 839-840
Author(s):  
William A. Yost

10.2741/2666 ◽  
2008 ◽  
Vol 13 (13) ◽  
pp. 148 ◽  
Author(s):  
Valter Ciocca
