A Diversity Combination Model Incorporating an Inward Bias for Interaural Time-Level Difference Cue Integration in Sound Lateralization

2020 ◽  
Vol 10 (18) ◽  
pp. 6356
Author(s):  
Sina Mojtahedi ◽  
Engin Erzin ◽  
Pekcan Ungan

A sound source at non-zero azimuth gives rise to interaural time and level differences (ITD and ILD). Studies of the hearing system indicate that these cues are encoded in different parts of the brain but are combined into a single lateralization percept, as evidenced by experiments demonstrating trading between them. According to the duplex theory of sound lateralization, ITD and ILD play a more significant role for low-frequency and high-frequency stimulation, respectively. In this study, ITD and ILD cues, extracted from generic head-related transfer functions, were imposed on a complex sound consisting of two low- and seven high-frequency tones. Two-alternative forced-choice behavioral tests were employed to assess accuracy in identifying a change in lateralization. Based on a diversity combination model and using the error-rate data obtained from the tests, the weights of the ITD and ILD cues in their integration were determined, incorporating a bias observed for inward shifts. The weights of the two cues were found to change with the azimuth of the sound source: while ILD appears to be the optimal cue for azimuths near the midline, the ITD and ILD weights become balanced for azimuths far from the midline.
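As an illustration of the kind of weighted cue integration this abstract describes, the sketch below implements a generic reliability-based (inverse-variance) combination with an additive bias term. The function names, the example sigmas, and the way the inward bias enters are illustrative assumptions, not the authors' actual diversity combination model.

```python
import numpy as np

def cue_weights(sigma_itd, sigma_ild):
    """Reliability-based weighting: each cue is weighted by its inverse
    variance, so the noisier cue contributes less to the combined percept."""
    r_itd, r_ild = 1.0 / sigma_itd ** 2, 1.0 / sigma_ild ** 2
    w_itd = r_itd / (r_itd + r_ild)
    return w_itd, 1.0 - w_itd

def combined_shift(shift_itd, shift_ild, w_itd, w_ild, inward_bias=0.0):
    """Weighted lateralization estimate with an additive inward-bias term."""
    return w_itd * shift_itd + w_ild * shift_ild - inward_bias

# Near the midline the ILD cue is assumed more reliable (smaller sigma),
# so it receives the larger weight.
w_itd, w_ild = cue_weights(sigma_itd=2.0, sigma_ild=1.0)
```

With equal and unbiased cue shifts, the combined estimate reproduces the common shift; unequal reliabilities simply pull the estimate toward the more reliable cue.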

2015 ◽  
Vol 114 (1) ◽  
pp. 531-539 ◽  
Author(s):  
Heath G. Jones ◽  
Andrew D. Brown ◽  
Kanthaiah Koka ◽  
Jennifer L. Thornton ◽  
Daniel J. Tollin

The century-old duplex theory of sound localization posits that low- and high-frequency sounds are localized with two different acoustical cues, interaural time and level differences (ITDs and ILDs), respectively. While behavioral studies in humans and behavioral and neurophysiological studies in a variety of animal models have largely supported the duplex theory, behavioral sensitivity to ILD is curiously invariant across the audible spectrum. Here we demonstrate that auditory midbrain neurons in the chinchilla (Chinchilla lanigera) also encode ILDs in a frequency-invariant manner, efficiently representing the full range of acoustical ILDs experienced as a joint function of sound source frequency, azimuth, and distance. We further show, using Fisher information, that nominal “low-frequency” and “high-frequency” ILD-sensitive neural populations can discriminate ILD with similar acuity, yielding neural ILD discrimination thresholds for near-midline sources comparable to behavioral discrimination thresholds estimated for chinchillas. These findings thus suggest a revision to the duplex theory and reinforce ecological and efficiency principles that hold that neural systems have evolved to encode the spectrum of biologically relevant sensory signals to which they are naturally exposed.
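The Fisher-information argument can be sketched generically: for a Poisson-spiking neuron with rate f(x), the Fisher information is I(x) = f'(x)²/f(x), and the Cramér–Rao bound gives a discrimination threshold of roughly 1/√I(x). The sigmoidal ILD tuning curve below is hypothetical, not drawn from the paper's data.

```python
import numpy as np

def fisher_information(tuning, x, dx=1e-3):
    """Fisher information of a Poisson-spiking neuron with rate f(x):
    I(x) = f'(x)**2 / f(x), using a central-difference derivative."""
    fp = (tuning(x + dx) - tuning(x - dx)) / (2.0 * dx)
    return fp ** 2 / tuning(x)

def discrimination_threshold(tuning, x):
    """Cramer-Rao bound: the smallest discriminable change in x scales
    as 1 / sqrt(I(x))."""
    return 1.0 / np.sqrt(fisher_information(tuning, x))

# Hypothetical sigmoidal ILD tuning curve (firing rate in spikes/s vs ILD in dB)
rate = lambda ild: 5.0 + 50.0 / (1.0 + np.exp(-ild / 4.0))
thr = discrimination_threshold(rate, 0.0)  # threshold near the midline (0 dB ILD)
```

Summing I(x) over a population before taking the square root is how a population-level neural threshold would be compared against a behavioral one.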


2021 ◽  
Vol 263 (5) ◽  
pp. 1488-1496
Author(s):  
Yunqi Chen ◽  
Chuang Shi ◽  
Hao Mu

Earphones are commonly equipped with miniature loudspeaker units, which cannot radiate enough low-frequency power. Moreover, because there is typically only one loudspeaker unit on each side of an earphone, multi-channel spatial audio processing cannot be applied. The combined use of virtual bass (VB) and head-related transfer functions (HRTFs) is therefore necessary for an immersive listening experience with earphones, yet the effect of combining VB and HRTFs has not been comprehensively reported. VB builds on the missing-fundamental effect, whereby a series of harmonics is perceived at the pitch of their fundamental frequency even when the fundamental itself is absent. HRTFs describe the transmission of a sound propagating from the source to the listener's ears: monaural audio processed by a pair of HRTFs is perceived as a sound source located in the direction associated with those HRTFs. This paper reports subjective listening tests whose results reveal that the harmonics required by VB should be generated in the same direction as the high-frequency sound. Bass quality is rarely distorted by the presence of HRTFs, but localization accuracy is occasionally degraded by VB.
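A minimal sketch of the missing-fundamental idea behind VB: synthesize only the upper harmonics of a fundamental, omitting the fundamental itself. The specific f0, harmonic set, and sample rate below are arbitrary choices for illustration, not the paper's VB algorithm.

```python
import numpy as np

def virtual_bass(f0, harmonics=(2, 3, 4), fs=44100, dur=0.5):
    """Synthesize the upper harmonics of f0 while omitting f0 itself; the
    missing-fundamental effect lets listeners perceive a pitch at f0."""
    t = np.arange(int(fs * dur)) / fs
    sig = sum(np.sin(2.0 * np.pi * n * f0 * t) for n in harmonics)
    return sig / len(harmonics)

sig = virtual_bass(f0=50.0)  # spectral energy only at 100, 150 and 200 Hz
```

A spectrum of `sig` shows no component at 50 Hz, yet the perceived pitch sits there; a practical VB system additionally shapes harmonic amplitudes to preserve loudness and timbre.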


2016 ◽  
Vol 116 (6) ◽  
pp. 2497-2512 ◽  
Author(s):  
Anne Kösem ◽  
Anahita Basirat ◽  
Leila Azizi ◽  
Virginie van Wassenhove

During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically informed on an individual's conscious speech percept.


2016 ◽  
Author(s):  
K. Kessler ◽  
R. A. Seymour ◽  
G. Rippon

Although atypical social behaviour remains a key characterisation of ASD, the presence of sensory and perceptual abnormalities has been given a more central role in recent classification changes. An understanding of the origins of such aberrations could thus prove a fruitful focus for ASD research. Early neurocognitive models of ASD suggested that the study of high-frequency activity in the brain, as a measure of cortical connectivity, might provide the key to understanding the neural correlates of sensory and perceptual deviations in ASD. As our review shows, the findings from subsequent research have been inconsistent, with a lack of agreement about the nature of any high-frequency disturbances in ASD brains. Based on the application of new techniques using more sophisticated measures of brain synchronisation and direction of information flow, and invoking the coupling between high and low frequency bands, we propose a framework that could reconcile apparently conflicting findings in this area and would be consistent both with emerging neurocognitive models of autism and with the heterogeneity of the condition.

Highlights:
Sensory and perceptual aberrations are becoming a core feature of the ASD symptom profile.
Brain oscillations and functional connectivity are consistently affected in ASD.
Relationships (coupling) between high and low frequencies are also deficient.
A novel framework proposes that the ASD brain is marked by local dysregulation and reduced top-down connectivity.
The ASD brain's ability to predict stimuli and events in the environment may be affected.
This may underlie perceptual sensitivities and cascade into social processing deficits in ASD.


2021 ◽  
Vol 91 (12) ◽  
pp. 2067
Author(s):  
Y. Katsnelson ◽  
А.В. Ильинский ◽  
Е.Б. Шадрин

A method of transcranial electromagnetic stimulation of the mammalian brain is proposed. The method is based on the interference of currents induced by high-frequency orthogonal oscillations of electric fields, which are modulated by low-frequency square-wave (meander) pulses. The effectiveness of the method was confirmed by experiments on stimulation of the brain in rats and rabbits.
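The interference principle invoked here can be illustrated with plain sinusoids: two high-frequency carriers at nearby frequencies sum to a signal whose amplitude envelope beats at the difference frequency, a low frequency that neural tissue could in principle follow even though it cannot follow either carrier. The carrier frequencies below are hypothetical, not the parameters used in the experiments.

```python
import numpy as np

# Two high-frequency carriers close in frequency; their sum is a carrier
# whose amplitude envelope beats at the difference frequency |f1 - f2|.
f1, f2, fs, dur = 2000.0, 2010.0, 100_000, 0.2
t = np.arange(int(fs * dur)) / fs
total = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
# Trig identity: total = 2*cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t),
# so the amplitude envelope oscillates at the 10 Hz difference frequency.
envelope = np.abs(2.0 * np.cos(np.pi * (f1 - f2) * t))
```

In transcranial interference schemes the beat is strongest where the two fields overlap, which is what allows a low-frequency effect to be targeted at depth.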


2015 ◽  
Vol 27 (8) ◽  
pp. 1542-1551 ◽  
Author(s):  
Kristof Strijkers ◽  
Daisy Bertrand ◽  
Jonathan Grainger

We investigated how linguistic intention affects the time course of visual word recognition by comparing the brain's electrophysiological response to a word's lexical frequency, a well-established psycholinguistic marker of lexical access, when participants actively retrieve the meaning of the written input (semantic categorization) versus a situation where no language processing is necessary (ink color categorization). In the semantic task, the ERPs elicited by high-frequency words started to diverge from those elicited by low-frequency words as early as 120 msec after stimulus onset. On the other hand, when categorizing the colored font of the very same words in the color task, word frequency did not modulate ERPs until some 100 msec later (220 msec poststimulus onset) and did so for a shorter period and with a smaller scalp distribution. The results demonstrate that, although written words indeed elicit automatic recognition processes in the brain, the speed and quality of lexical processing critically depends on the top–down intention to engage in a linguistic task.


2015 ◽  
Vol 32 (10) ◽  
pp. 1915-1927 ◽  
Author(s):  
Sung Yong Kim

This paper presents examples of the data quality assessment of surface radial velocity maps obtained from shore-based single and multiple high-frequency radars (HFRs) using statistical and dynamical approaches in a hindcast mode. Since a single radial velocity map contains partial information regarding a true current field, archived radial velocity data embed geophysical signals, such as tides, wind stress, and near-inertial and low-frequency variance. The spatial consistency of the geophysical signals and their dynamic relationships with driving forces are used to conduct the quality assurance and quality control of radial velocity data. For instance, spatial coherence, tidal amplitudes and phases, and wind-radial transfer functions are used to identify spurious range and azimuthal bins. The uncertainty and signal-to-noise ratio of radial data are estimated with the standard deviation and cross correlation of paired radials sampled at nearby grid points that belong to two different radars. This review paper can benefit HFR users and operators and those who are interested in analyzing HFR-derived surface radial velocity data.
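The paired-radial uncertainty estimate described here might be sketched as follows. The equal-and-independent-noise assumption and the synthetic tidal-band signal are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def paired_radial_quality(r1, r2):
    """Quality metrics from radial velocities sampled at nearby grid points
    that belong to two different radars."""
    rho = np.corrcoef(r1, r2)[0, 1]              # cross correlation of the pair
    rms_diff = np.sqrt(np.mean((r1 - r2) ** 2))
    # Assuming independent, equal-variance noise on each radar, the noise
    # standard deviation of a single radial is rms_diff / sqrt(2).
    noise_std = rms_diff / np.sqrt(2.0)
    snr = np.var(r1) / noise_std ** 2
    return rho, noise_std, snr

# Synthetic check: a shared semi-diurnal-like signal plus independent noise
rng = np.random.default_rng(1)
t = np.arange(1000.0)                            # hourly samples
true = 0.3 * np.sin(2 * np.pi * t / 12.42)       # shared tidal-band current
r1 = true + 0.05 * rng.standard_normal(t.size)
r2 = true + 0.05 * rng.standard_normal(t.size)
rho, noise_std, snr = paired_radial_quality(r1, r2)
```

Low cross correlation or a noise estimate far above its neighbours flags a suspect range or azimuthal bin.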


2020 ◽  
Author(s):  
Andrew J Quinn ◽  
Gary GR Green ◽  
Mark Hymers

The spatial and spectral structure of oscillatory networks in the brain provides a readout of the underlying neuronal function. Within- and between-subject variability in these networks can be highly informative but also poses a considerable analytic challenge. Here, we describe a method that simultaneously estimates spectral and spatial network structure without assumptions about either feature distorting estimation of the other. This enables analyses of how variability in the frequency and spatial structure of oscillatory networks might vary both across the brain and across individuals. The method performs a modal decomposition of an autoregressive model to describe the oscillatory signals present within a time series in terms of their peak frequency and damping time. Moreover, an alternative mathematical formulation of the system transfer function can be written in terms of these oscillatory modes, describing the spatial topography and network structure of each component. We define a set of Spatio-Spectral Eigenmodes (SSEs) from these parameters to provide a parsimonious description of oscillatory networks. Crucially, the SSEs preserve the rich between-subject variability and are constructed without pre-averaging within specified frequency bands or limiting analyses to single channels or regions. After validating the method on simulated data, we explore the structure of whole-brain oscillatory networks in eyes-open resting-state MEG data from the Human Connectome Project. We show wide variability in the peak frequency and network structure of alpha oscillations and reveal a distinction between occipital ‘high-frequency alpha’ and parietal ‘low-frequency alpha’. The frequency difference between occipital and parietal alpha components is present within individual participants but is partially masked by larger between-subject variability; a 10 Hz oscillation may represent the high-frequency occipital component in one participant and the low-frequency parietal component in another. This rich characterisation of individual neural phenotypes has the potential to enhance analyses of the relationship between neural dynamics and a person's behavioural, cognitive, or clinical state.
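The modal decomposition step can be sketched for a scalar AR model: the roots of the AR characteristic polynomial give complex pole pairs, and each pair corresponds to an oscillatory mode with a peak frequency and a damping time. The AR(2) example below is a toy resonance, not the multivariate estimator used in the paper.

```python
import numpy as np

def ar_modes(ar_coefs, fs):
    """Modal decomposition of a scalar AR model: each complex pole pair
    yields an oscillatory mode with a peak frequency and a damping time."""
    # AR model: x_t = sum_k a_k x_{t-k} + e_t; the poles are the roots of
    # z^p - a_1 z^(p-1) - ... - a_p.
    poles = np.roots(np.concatenate(([1.0], -np.asarray(ar_coefs))))
    modes = []
    for z in poles:
        if z.imag > 0:  # keep one pole per conjugate pair
            freq = np.angle(z) * fs / (2.0 * np.pi)       # peak frequency (Hz)
            damping = -1.0 / (np.log(np.abs(z)) * fs)     # damping time (s)
            modes.append((freq, damping))
    return modes

# AR(2) constructed to resonate near 10 Hz at fs = 100 Hz
fs, f0, r = 100.0, 10.0, 0.95
a1 = 2.0 * r * np.cos(2.0 * np.pi * f0 / fs)
a2 = -r ** 2
modes = ar_modes([a1, a2], fs)
```

Poles closer to the unit circle (larger damping time) correspond to sharper, longer-lived oscillations, which is what makes the frequency/damping pair a useful per-mode summary.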


2006 ◽  
Vol 3 (9) ◽  
pp. 561-571 ◽  
Author(s):  
Wei Dong ◽  
Nigel P Cooper

The active and nonlinear mechanical processing of sound that takes place in the mammalian cochlea is fundamental to our sense of hearing. We have investigated the effects of opening the cochlea in order to make experimental observations of this processing. Using an optically transparent window that permits laser interferometric access to the apical turn of the guinea-pig cochlea, we show that the acousto-mechanical transfer functions of the sealed (i.e. near intact) cochlea are considerably simpler than those of the unsealed cochlea. Comparison of our results with those of others suggests that most previous investigations of apical cochlear mechanics have been made under unsealed conditions, and are therefore likely to have misrepresented the filtering of low-frequency sounds in the cochlea. The mechanical filtering that is apparent in the apical turns of sealed cochleae also differs from the filtering seen in individual auditory nerve fibres with similar characteristic frequencies. As previous studies have shown the neural and mechanical tuning of the basal cochlea to be almost identical, we conclude that the strategies used to process low frequency sounds in the apical turns of the cochlea might differ fundamentally from those used to process high frequency sounds in the basal turns.


Loquens ◽  
2019 ◽  
Vol 5 (2) ◽  
pp. 054
Author(s):  
María Cuesta ◽  
Pedro Cobo

Although tinnitus, the conscious perception of a sound without a sound source external or internal to the body, is highly correlated with hearing loss, the precise nature of that correlation remains unknown. People with high-pitch tinnitus tend to suffer from high-frequency hearing loss, and, conversely, low-pitch tinnitus is mostly associated with low-frequency hearing loss. However, many subjects with low- or high-frequency losses do not develop tinnitus, so studies relating audiometric characteristics to tinnitus features remain relevant. This article presents a correlational study of audiometric and tinnitus variables in a sample of 34 subjects, paying special attention to the heterogeneous subtypes of both audiometry shape and tinnitus etiology. Our results, which concur with others previously published, demonstrate that the tinnitus pitch (the main frequency of the tinnitus spectrum) in subjects with steep high-frequency and continuously sloping hearing losses is highly correlated with the frequency at which hearing loss reaches 50 dB HL.
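The reported correlate, the frequency at which hearing loss reaches 50 dB HL, can be read off an audiogram by interpolation. The sketch below uses log-frequency linear interpolation on a hypothetical high-frequency-sloping audiogram; the function name and the first-crossing convention are assumptions, not the paper's exact method.

```python
import numpy as np

def freq_at_loss(freqs_hz, loss_db, target_db=50.0):
    """Interpolate the audiogram on a log-frequency axis to find the
    frequency at which hearing loss first reaches target_db."""
    logf = np.log2(freqs_hz)
    for i in range(len(loss_db) - 1):
        lo, hi = loss_db[i], loss_db[i + 1]
        if lo < target_db <= hi:
            frac = (target_db - lo) / (hi - lo)
            return 2.0 ** (logf[i] + frac * (logf[i + 1] - logf[i]))
    return None  # loss never reaches target_db

# Hypothetical high-frequency-sloping audiogram (standard octave frequencies)
f = [250, 500, 1000, 2000, 4000, 8000]
hl = [10, 15, 20, 35, 60, 75]
f50 = freq_at_loss(f, hl)  # crossing lies between 2 and 4 kHz
```

Comparing `f50` against the measured tinnitus pitch is the kind of per-subject pairing a correlational analysis like this one would tabulate.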

