The equivalent arc ratio for auditory space

2017 ◽  
Author(s):  
W. Owen Brimijoin

The minimum audible movement angle increases as a function of source azimuth. If listeners do not perceptually compensate for this change in acuity, then sounds rotating around the head should appear to move faster at the front than at the side. We examined whether judgments of relative amounts of acoustic motion depend on signal center angle and found that the azimuth of two signals strongly affects their point of subjective similarity for motion. Signal motion centered at 90° had to be roughly twice as large as motion centered at 0° to be judged as equivalent. This distortion of acoustic space around the listener suggests that the perceived velocity of moving sound sources changes as a function of azimuth around the head. The "equivalent arc ratio," a mathematical framework based on these results, provides quantitative explanations for previously documented discrepancies in spatial localization, motion perception, and head-to-world coordinate transformations.
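
To make the reported effect concrete, here is a minimal sketch of an equivalent-arc scaling. The sinusoidal interpolation between a ratio of 1 at 0° and roughly 2 at 90° is our illustrative assumption, not the paper's fitted framework.

```python
import math

def equivalent_arc(arc_front_deg, center_azimuth_deg, ratio_at_side=2.0):
    """Arc centered at `center_azimuth_deg` that is judged equal in extent
    to `arc_front_deg` centered at 0 degrees. The sinusoidal growth from a
    ratio of 1 (front) to `ratio_at_side` (90 degrees) is an illustrative
    assumption, not the paper's fitted function."""
    ratio = 1.0 + (ratio_at_side - 1.0) * abs(math.sin(math.radians(center_azimuth_deg)))
    return arc_front_deg * ratio

print(equivalent_arc(10.0, 0.0))    # 10.0 -- no scaling at the front
print(equivalent_arc(10.0, 90.0))   # 20.0 -- roughly twice as large at the side
```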

2000 ◽  
Vol 83 (5) ◽  
pp. 2723-2739 ◽  
Author(s):  
Gregg H. Recanzone ◽  
Darren C. Guard ◽  
Mimi L. Phan ◽  
Tien-I K. Su

Lesion studies have indicated that the auditory cortex is crucial for the perception of acoustic space, yet it remains unclear how these neurons participate in this perception. To investigate this, we studied the responses of single neurons in the primary auditory cortex (AI) and the caudomedial field (CM) of two monkeys while they performed a sound-localization task. Regression analysis indicated that the responses of ∼80% of neurons in both cortical areas were significantly correlated with the azimuth or elevation of the stimulus, or both, which we term "spatially sensitive." The proportion of spatially sensitive neurons was greater for stimulus azimuth than for stimulus elevation, and elevation sensitivity was primarily restricted to neurons that were tested using stimuli that the monkeys also could localize in elevation. Most neurons responded best to contralateral speaker locations, but we also encountered neurons that responded best to ipsilateral locations and neurons that had their greatest responses restricted to a circumscribed region within the central 60° of frontal space. Comparing the spatially sensitive neurons with those that were not spatially sensitive indicated that these two populations could not be distinguished by firing rate, rate/level functions, or topographic location within AI. Direct comparisons between the responses of individual neurons and the behaviorally measured sound-localization ability indicated that a greater proportion of CM neurons than AI neurons had spatial sensitivity consistent with the behavioral performance. Pooling the responses across neurons strengthened the relationship between the neuronal and psychophysical data and indicated that the responses pooled across relatively few CM neurons contain enough information to account for sound-localization ability. These data support the hypothesis that auditory space is processed in a serial manner from AI to CM in the primate cerebral cortex.
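
A toy illustration of the pooling result: simulated neurons with noisy, azimuth-dependent firing are each correlated with stimulus azimuth, and averaging across the population strengthens the relationship. The tuning model and noise levels are hypothetical, not the recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)
azimuths = np.linspace(-90, 90, 13)      # hypothetical speaker azimuths (deg)
n_neurons, n_trials = 20, 30

# Toy neurons: weak monotonic preference for one hemifield plus noise.
slopes = rng.uniform(0.02, 0.08, n_neurons)
rates = (10.0 + slopes[:, None, None] * azimuths[None, :, None]
         + rng.normal(0.0, 2.0, (n_neurons, len(azimuths), n_trials)))

# Spatial sensitivity per neuron: correlation of mean rate with azimuth.
mean_rates = rates.mean(axis=2)                        # neurons x azimuths
r_single = [np.corrcoef(azimuths, mr)[0, 1] for mr in mean_rates]

# Pooling responses across neurons strengthens the relationship.
pooled = mean_rates.mean(axis=0)
r_pooled = np.corrcoef(azimuths, pooled)[0, 1]
print(f"median single-neuron r = {np.median(r_single):.2f}, pooled r = {r_pooled:.2f}")
```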


2005 ◽  
Vol 16 (06) ◽  
pp. 909-920 ◽  
Author(s):  
T. JANSE VAN RENSBURG ◽  
M. A. VAN WYK ◽  
W.-H. STEEB

Three-dimensional coordinate transformations are an essential part of the realistic visual display within a driving simulator. They are also used in other simulators, such as flight simulators, and in robotics. In this paper, the mathematical framework for implementing three-dimensional coordinate transformations is presented, with additional detail on implementation in a programming language such as C++. The realistic positioning of an observer for the "behind and above" view in a driving simulator is discussed as an application of coordinate system transformations.
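
The paper works through the details in C++; the same transformation is sketched below in Python for brevity. The camera offsets are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def rotation_z(yaw):
    """3x3 rotation about the vertical axis by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def behind_and_above(car_position, car_yaw, back=8.0, up=3.0):
    """World-space observer position for the 'behind and above' view:
    an offset defined in the car's local frame is rotated into world
    coordinates and added to the car's position. `back` and `up` are
    illustrative distances."""
    offset_local = np.array([-back, 0.0, up])   # behind along -x, above along +z
    return car_position + rotation_z(car_yaw) @ offset_local

car_position = np.array([100.0, 50.0, 0.0])
print(behind_and_above(car_position, np.radians(30.0)))
```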


2013 ◽  
Vol 109 (4) ◽  
pp. 924-931 ◽  
Author(s):  
Caitlin S. Baxter ◽  
Brian S. Nelson ◽  
Terry T. Takahashi

Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi, Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
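
A rough sketch of the masking logic described above: a source contributes to the space map only where its envelope exceeds the competing one while the averaged amplitude is rising. This is our simplified reading of the Nelson and Takahashi model, using toy envelopes rather than the owls' stimuli.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 40_000                                   # sample rate (Hz), illustrative
raw = np.abs(rng.normal(size=fs // 10))       # 100 ms of toy fluctuations
lead = np.convolve(raw, np.ones(200) / 200, mode="same")   # smoothed envelope
delay = int(0.002 * fs)                       # 2-ms echo delay
echo = np.concatenate([np.zeros(delay), lead[:-delay]])

def map_strength(env_a, env_b):
    """Strength of source A's space-map representation: A contributes
    where it is louder than B while the averaged amplitude of both
    sounds is rising (our simplified reading of the model)."""
    avg = (env_a + env_b) / 2.0
    rising = np.diff(avg, prepend=avg[0]) > 0
    return float(env_a[(env_a > env_b) & rising].sum())

s_lead = map_strength(lead, echo)
s_echo = map_strength(echo, lead)
print("lead" if s_lead > s_echo else "echo", "wins on the map")
```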


Author(s):  
Huakang Li ◽  
Akira Saji ◽  
Keita Tanno ◽  
Jun Ma ◽  
Jie Huang ◽  
...  

2021 ◽  
Vol 17 (8) ◽  
pp. e1009251
Author(s):  
Alex D. Reyes

In the auditory system, tonotopy is postulated to be the substrate for a place code, where sound frequency is encoded by the location of the neurons that fire during the stimulus. Though conceptually simple, the computations that allow for the representation of intensity and complex sounds are poorly understood. Here, a mathematical framework is developed to clearly define the conditions that support a place code. To accommodate both frequency and intensity information, the neural network is described as a space with elements that represent individual neurons and clusters of neurons. A mapping is then constructed from acoustic space to neural space so that frequency and intensity are encoded, respectively, by the location and size of the clusters. Algebraic operations (addition and multiplication) are derived to elucidate the rules for representing, assembling, and modulating multi-frequency sound in networks. The resulting outcomes of these operations are consistent with network simulations as well as with electrophysiological and psychophysical data. The analyses show how both frequency and intensity can be encoded with a purely place code, without the need for rate or temporal coding schemes. The algebraic operations are used to describe loudness summation and suggest a mechanism for the critical band. The mathematical approach complements experimental and computational approaches and provides a foundation for interpreting data and constructing models.
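
A minimal sketch of the encoding the abstract describes: tone frequency sets the cluster's location along a tonotopic axis and intensity sets its size. Axis length, frequency range, and the width-per-dB factor are made-up parameters, not the paper's.

```python
import numpy as np

N = 1000                        # neurons along the tonotopic axis (assumed)
F_MIN, F_MAX = 0.2, 20.0        # frequency range in kHz (assumed)

def cluster(freq_khz, level_db, width_per_db=2.5):
    """Active cluster for a pure tone: *location* encodes frequency
    (log-spaced tonotopy), *size* encodes intensity. `width_per_db`
    is an illustrative scaling, not a fitted value."""
    center = int(N * np.log(freq_khz / F_MIN) / np.log(F_MAX / F_MIN))
    half = int(level_db * width_per_db / 2)
    return set(range(max(0, center - half), min(N, center + half + 1)))

quiet, loud = cluster(4.0, 20), cluster(4.0, 60)
print(len(quiet), len(loud))    # same place, larger cluster at higher intensity
```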


2020 ◽  
Author(s):  
Alex D. Reyes

In the auditory system, tonotopy is postulated to be the substrate for a place code, where sound frequency is encoded by the location of the neurons that fire during the stimulus. Though conceptually simple, the computations that allow for the representation of intensity and complex sounds are poorly understood. Here, a mathematical framework is developed to clearly define the conditions that support a place code. To accommodate both frequency and intensity information, the neural network is described as a topological space with elements that represent individual neurons and clusters of neurons. A bijective mapping is then constructed from acoustic space to neural space so that frequency and intensity are encoded, respectively, by the location and size of the clusters. Algebraic operations (addition and multiplication) are derived to elucidate the rules for representing, assembling, and modulating multi-frequency sound in networks. The predicted outcomes of these operations are consistent with network simulations as well as with electrophysiological and psychophysical data. The analyses show that acoustic information can be encoded with a purely place code, without the need for rate or temporal coding schemes. The mathematical approach complements experimental and computational approaches and provides a foundation for interpreting data and constructing models.

Author Summary: One way of encoding sensory information in the brain is with a so-called place code. In the auditory system, tones of increasing frequencies activate sets of neurons at progressively different locations along an axis. The goal of this study is to elucidate the mathematical principles for representing tone frequency and intensity in neural networks. The rigorous, formal process ensures that the conditions for a place code and the associated computations are defined precisely. This mathematical approach offers new insights into experimental data and a framework for constructing network models.
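
The preprint's stronger claim that the mapping is bijective can be pictured by inverting the toy encoding sketched earlier: a cluster's location recovers the frequency and its size recovers the intensity. The same caveat applies; all parameters are illustrative.

```python
import numpy as np

N, F_MIN, F_MAX = 1000, 0.2, 20.0      # same illustrative constants as above

def encode(freq_khz, level_db, width_per_db=2.5):
    """Tone -> cluster, summarized as (location, half-width)."""
    center = N * np.log(freq_khz / F_MIN) / np.log(F_MAX / F_MIN)
    return center, level_db * width_per_db / 2

def decode(center, half, width_per_db=2.5):
    """Cluster -> tone: invertibility is what would make this a pure
    place code, with no rate or temporal information required."""
    freq_khz = F_MIN * np.exp(center / N * np.log(F_MAX / F_MIN))
    return freq_khz, 2 * half / width_per_db

print(decode(*encode(4.0, 60.0)))      # recovers (4.0, 60.0) exactly
```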


2008 ◽  
Vol 20 (3) ◽  
pp. 603-635 ◽  
Author(s):  
Murat Aytekin ◽  
Cynthia F. Moss ◽  
Jonathan Z. Simon

Sound localization is known to be a complex phenomenon, combining multisensory information processing, experience-dependent plasticity, and movement. Here we present a sensorimotor model that addresses the question of how an organism could learn to localize sound sources without any a priori neural representation of its head-related transfer function or prior experience with auditory spatial information. We demonstrate quantitatively that the experience of the sensory consequences of its voluntary motor actions allows an organism to learn the spatial location of any sound source. Using examples from humans and echolocating bats, our model shows that a naive organism can learn auditory space based solely on acoustic inputs and their relation to motor states.
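
The paper's learning scheme is considerably richer than anything shown here; the toy loop below only conveys the core intuition that motor states can stand in for source directions. The ILD function is a stand-in for the unknown head-related acoustics, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def ild_db(source_az_deg, head_az_deg, gain_db=6.0):
    """Toy interaural level difference: grows with the angle between the
    source and the direction the head faces (an invented stand-in for
    the unknown head-related transfer function)."""
    return gain_db * np.sin(np.radians(source_az_deg - head_az_deg))

# A naive agent probes motor states (head orientations) and notes which
# one nulls the binaural cue; that motor state itself serves as the
# learned direction of the source -- no prior spatial map is needed.
source_az = float(rng.uniform(-90, 90))
motor_states = np.arange(-90, 91)                  # candidate head yaws (deg)
cues = np.abs([ild_db(source_az, h) for h in motor_states])
learned = int(motor_states[np.argmin(cues)])
print(f"true azimuth {source_az:+.1f} deg, learned {learned:+d} deg")
```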


2001 ◽  
Vol 86 (2) ◽  
pp. 1043-1046 ◽  
Author(s):  
Thomas D. Mrsic-Flogel ◽  
Andrew J. King ◽  
Rick L. Jenison ◽  
Jan W. H. Schnupp

The localization of sounds in space is based on spatial cues that arise from the acoustical properties of the head and external ears. Individual differences in localization cue values result from variability in the shape and dimensions of these structures. We have mapped spatial response fields of high-frequency neurons in ferret primary auditory cortex using virtual sound sources based either on the animal's own ears or on the ears of other subjects. For 73% of units, the response fields measured using the animals' own ears differed significantly in shape and/or position from those obtained using spatial cues from another ferret. The observed changes correlated with individual differences in the acoustics. These data are consistent with previous reports showing that humans localize less accurately when listening to virtual sounds from other individuals. Together these findings support the notion that neural mechanisms underlying auditory space perception are calibrated by experience to the properties of the individual.
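
One way to picture the 73% result: model each unit's spatial response field as a tuning curve over azimuth and compare the own-ear and other-ear versions by correlation, so that a low correlation flags a field that changed shape or position. The Gaussian fields and the size of the shift below are invented for illustration.

```python
import numpy as np

azimuths = np.linspace(-180.0, 180.0, 72)

def response_field(best_az_deg, width_deg=40.0):
    """Toy Gaussian spatial response field over azimuth (deg)."""
    d = (azimuths - best_az_deg + 180.0) % 360.0 - 180.0   # wrapped distance
    return np.exp(-0.5 * (d / width_deg) ** 2)

own_ears = response_field(-60.0)     # virtual sources filtered through own ears
other_ears = response_field(-35.0)   # another animal's cues shift the field
                                     # (the 25-deg shift is invented)
r = np.corrcoef(own_ears, other_ears)[0, 1]
print(f"field similarity r = {r:.2f}")   # low r flags a changed response field
```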

