Acuity of Sound Localisation: A Topography of Auditory Space. I. Normal Hearing Conditions

Perception ◽  
1984 ◽  
Vol 13 (5) ◽  
pp. 581-600 ◽  
Author(s):  
Simon R Oldfield ◽  
Simon P A Parker

Eight subjects were required to localise a sound source (white noise through a speaker) which varied in position on both sides of the head over a range of elevations (-40° to +40°) and azimuths (0° to 180°) at 10° intervals. The perceived position of the source was indicated by pointing a special gun. Depression of the trigger activated a photographic system which recorded two views of the subject, the sound source, and the gun. The absolute and algebraic azimuth and elevation errors were measured for all subjects at each position of the source. The variability of azimuth and elevation error was also computed. In a second experiment, four of the same subjects performed the same task but in this case visually located the sources. This experiment provided an estimate of inherent motor error in the pointing task. No differences in localisation acuity between sides were found, but there were significant differences between front and back regions. Azimuth and elevation error were well matched and low in the front. However, azimuth error increased in the regions behind the head, particularly for azimuth positions 120° to 160°. Larger increases were found for positions in the upper elevations of this region. Elevation error also increased in the upper elevations behind the head. A comparison of the auditory and visual data indicates that this pattern of error is not due to motor factors. The results are discussed in relation to the structural characteristics of the pinnae and modifications that they impose on incoming sound energy.
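The absolute and algebraic error measures described above can be sketched as follows; the sign convention for the algebraic (signed) error is an assumption for illustration only:

```python
import statistics

def error_stats(target_deg, responses_deg):
    """Absolute and algebraic (signed) localisation errors for one source
    position, plus response variability, in the spirit of the paper's
    analysis. Positive signed error is taken here (by assumption) as a
    response displaced clockwise of the target."""
    signed = [r - target_deg for r in responses_deg]
    return {
        "algebraic_error": statistics.mean(signed),                # signed bias
        "absolute_error": statistics.mean(abs(e) for e in signed), # unsigned magnitude
        "variability": statistics.stdev(signed),                   # spread of responses
    }

# Example: five responses to a source at azimuth 140 deg, a back-region
# position where error tends to be larger
stats = error_stats(140.0, [150.0, 155.0, 128.0, 160.0, 147.0])
```

The algebraic error reveals a systematic bias (e.g. consistent front/back displacement), while the absolute error captures overall inaccuracy even when signed errors cancel.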

Perception ◽  
1984 ◽  
Vol 13 (5) ◽  
pp. 601-617 ◽  
Author(s):  
Simon R Oldfield ◽  
Simon P A Parker

The acuity of azimuth and elevation discrimination was measured under conditions in which the cues to localisation provided by the pinnae were removed. Four subjects localised a sound source (white noise through a speaker) which varied in position over a range of elevations (-40° to +40°) and azimuths (0° to 180°), at 10° intervals, on the left side of the head. Pinna cues were removed by the insertion of individually cast moulds in both pinnae. Each mould had an access hole to the auditory canal. The absolute and algebraic azimuth and elevation errors were measured for all subjects at each position of the source. The variability of azimuth and elevation error was also computed. The performance of the subjects was compared to their performance under normal hearing conditions. Insertion of the pinna moulds was found to substantially increase elevation error and the number of front/back reversals. The importance of the cues provided by the pinnae in these discriminations was thus confirmed. However, the increase in elevation error did not result in a corresponding increase in azimuth error. These findings provide support for the proposition that azimuth and elevation discrimination are coded independently.


2002 ◽  
Vol 87 (4) ◽  
pp. 1749-1762 ◽  
Author(s):  
Shigeto Furukawa ◽  
John C. Middlebrooks

Previous studies have demonstrated that the spike patterns of cortical neurons vary systematically as a function of sound-source location such that the response of a single neuron can signal the location of a sound source throughout 360° of azimuth. The present study examined specific features of spike patterns that might transmit information related to sound-source location. Analysis was based on responses of well-isolated single units recorded from cortical area A2 in α-chloralose-anesthetized cats. Stimuli were 80-ms noise bursts presented from loudspeakers in the horizontal plane; source azimuths ranged through 360° in 20° steps. Spike patterns were averaged across samples of eight trials. A competitive artificial neural network (ANN) identified sound-source locations by recognizing spike patterns; the ANN was trained using the learning vector quantization learning rule. The information about stimulus location that was transmitted by spike patterns was computed from joint stimulus-response probability matrices. Spike patterns were manipulated in various ways to isolate particular features. Full-spike patterns, which contained all spike-count information and spike timing with 100-μs precision, transmitted the most stimulus-related information. Transmitted information was sensitive to disruption of spike timing on a scale of more than ∼4 ms and was reduced by an average of ∼35% when spike-timing information was obliterated entirely. In a condition in which all but the first spike in each pattern were eliminated, transmitted information decreased by an average of only ∼11%. In many cases, that condition showed essentially no loss of transmitted information. Three unidimensional features were extracted from spike patterns. Of those features, spike latency transmitted ∼60% more information than that transmitted either by spike count or by a measure of latency dispersion. 
Information transmission by spike patterns recorded on single trials was substantially reduced compared with the information transmitted by averages of eight trials. In a comparison of averaged and nonaveraged responses, however, the information transmitted by latencies was reduced by only ∼29%, whereas information transmitted by spike counts was reduced by 79%. Spike counts clearly are sensitive to sound-source location and could transmit information about sound-source locations. Nevertheless, the present results demonstrate that the timing of the first poststimulus spike carries a substantial amount, probably the majority, of the location-related information present in spike patterns. The results indicate that any complete model of the cortical representation of auditory space must incorporate the temporal characteristics of neuronal response patterns.
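The transmitted-information measure referred to above, computed from joint stimulus-response probability matrices, is in essence the mutual information of a classification confusion matrix. A minimal sketch, using a hypothetical 2×2 matrix for illustration:

```python
import math

def transmitted_information(confusion):
    """Stimulus-related transmitted information (bits) from a joint
    stimulus-response count matrix:
        I(S;R) = sum over s,r of p(s,r) * log2[ p(s,r) / (p(s) p(r)) ].
    confusion[s][r] counts trials on which stimulus s was classified as
    response r (e.g. by the competitive ANN)."""
    total = sum(sum(row) for row in confusion)
    p_s = [sum(row) / total for row in confusion]
    p_r = [sum(confusion[s][r] for s in range(len(confusion))) / total
           for r in range(len(confusion[0]))]
    info = 0.0
    for s, row in enumerate(confusion):
        for r, n in enumerate(row):
            if n:  # skip empty cells (p log p -> 0)
                p_sr = n / total
                info += p_sr * math.log2(p_sr / (p_s[s] * p_r[r]))
    return info

# Perfect two-way classification transmits log2(2) = 1 bit;
# chance performance transmits 0 bits.
perfect = [[50, 0], [0, 50]]
chance = [[25, 25], [25, 25]]
```

A degraded spike-pattern condition (e.g. spike timing obliterated) shows up as off-diagonal mass in the matrix and hence lower transmitted information.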


2016 ◽  
Vol 41 (3) ◽  
pp. 437-447
Author(s):  
Dominik Storek ◽  
Frantisek Rund ◽  
Petr Marsalek

This paper analyses the performance of the Differential Head-Related Transfer Function (DHRTF), an alternative transfer function for headphone-based virtual sound source positioning within the horizontal plane. This experimental one-channel function is used to reduce processing and avoid timbre alteration while preserving the signal features important for sound localisation. The positioning algorithm employing the DHRTF is compared with two other common positioning methods: amplitude panning and HRTF processing. Results of a theoretical comparison and of a quality assessment of the methods by subjective listening tests are presented. The tests focus on distinctive aspects of the positioning methods: spatial impression, timbre alteration, and loudness fluctuations. The results show that the DHRTF positioning method is applicable, with very promising performance: it avoids the perceptible channel coloration that occurs with the HRTF method, and it delivers spatial impression more successfully than the simple amplitude panning method.
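For reference, the amplitude-panning baseline can be sketched with a standard constant-power pan law; the exact pan law used in the paper is not stated, so this particular form is an assumption:

```python
import math

def constant_power_pan(sample, azimuth_deg, span_deg=90.0):
    """Constant-power amplitude panning between a left/right speaker pair,
    a minimal sketch of a generic 'amplitude panning' method (not
    necessarily the authors' implementation).
    azimuth_deg ranges from -span_deg/2 (full left) to +span_deg/2 (full
    right)."""
    # Map azimuth to a pan angle in [0, pi/2], then split the sample by
    # cosine/sine gains so that gL^2 + gR^2 == 1 (constant total power).
    theta = (azimuth_deg / span_deg + 0.5) * math.pi / 2
    return sample * math.cos(theta), sample * math.sin(theta)

# A centred source is split equally between the two channels
left, right = constant_power_pan(1.0, 0.0)
```

Because this method only manipulates level differences, it cannot reproduce the spectral cues a (D)HRTF carries, which is consistent with its weaker spatial impression in the listening tests.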


1969 ◽  
Vol 12 (2) ◽  
pp. 351-361 ◽  
Author(s):  
Maurice I. Mendel ◽  
Robert Goldstein

The early components of the averaged electroencephalic response (AER) were examined at three-hour intervals in eight normal hearing adults over a single, sleepless 24-hour span. During each of the eight sessions, three series of clicks at 50 dB SL were presented to the right ear of the subject as he sat reading. 1024 clicks at the rate of 9.6/sec were used in obtaining each averaged response. Electroencephalic activity was recorded from an electrode on the vertex referred to the left earlobe. The response pattern was very stable, characterized by a polyphasic configuration with mean peak latencies of 13.3 msec (Po), 22.0 msec (Na), 32.3 msec (Pa), and 45.1 msec (Nb). An earlier negative peak (No) with a mean peak latency of 8.3 msec occurred in many of the responses. At the conclusion of the 24-hour span, three of the subjects were tested with the same stimuli during various stages of sleep. The early components of the AER remained consistent even during sleep. Threshold searches were successfully carried out on two of the sleeping subjects. The long-term stability of the early components of the AER in the awake and sleep states makes them practical as a response index for electroencephalic audiometry. Their characteristics are more compatible with a neurogenic than with a myogenic theory of their origin.
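The averaging that underlies the AER can be illustrated with a toy sketch (the waveform and noise values here are invented for demonstration): averaging N stimulus-locked epochs attenuates uncorrelated background activity by roughly a factor of sqrt(N), which is why 1024 click responses were averaged per AER.

```python
import random

def averaged_response(epochs):
    """Point-by-point average of stimulus-locked EEG epochs. With N
    epochs, uncorrelated background EEG shrinks by about sqrt(N) while
    the stimulus-locked evoked component is preserved."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

# Toy demonstration: a fixed evoked waveform buried in unit-variance noise
random.seed(0)
evoked = [0.0, 1.0, 2.0, 1.0, 0.0]
epochs = [[v + random.gauss(0, 1) for v in evoked] for _ in range(1024)]
avg = averaged_response(epochs)  # recovers `evoked` to within ~1/32
```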


1979 ◽  
Vol 44 (3) ◽  
pp. 354-362 ◽  
Author(s):  
Jeffrey L. Danhauer ◽  
Jonathan G. Leppler

Thirty-five normal-hearing listeners' speech discrimination scores were obtained for the California Consonant Test (CCT) in four noise competitors: (1) a four-talker complex (FT), (2) a nine-talker complex developed at Bowling Green State University (BGMTN), (3) cocktail party noise (CPN), and (4) white noise (WN). Five listeners received the CCT stimuli mixed ipsilaterally with each of the competing noises at one of seven different signal-to-noise ratios (S/Ns). Articulation functions were plotted for each noise competitor. Statistical analysis revealed that the noise types produced few differences on the CCT scores over most of the S/Ns tested, but that noise competitors similar to peripheral maskers (CPN and WN) had less effect on the scores at more severe levels than competitors more similar to perceptual maskers (FT and BGMTN). Results suggest that the CCT should be sufficiently difficult even without the presence of a noise competitor for normal-hearing listeners in many audiologic testing situations. Levels that should approximate CCT maximum discrimination (D-Max) scores for normal listeners are suggested for use when clinic time does not permit the establishment of articulation functions. The clinician should determine the S/N of the CCT tape itself before establishing listening levels.


Author(s):  
Lucas H. S. do Carmo ◽  
Ewerton C. Camargo ◽  
Alexandre N. Simos

Making use of theoretical approximations for the computation of the wave-induced slow-drift forces is a common procedure in the early stages of design of a new floating unit. They can help reduce the computational burden on two fronts: in generating the QTFs in a frequency-domain analysis, and during the subsequent execution of time-domain simulations. In a previous paper, we discussed a simple procedure for making use of the white-noise approximation in FAST, without the need for any modification of the software. The proposal only requires restricting the computation of the QTFs to pairs of frequencies that are indeed essential to the slow-drift dynamics. For this, however, an additional assumption is made: each motion is considered decoupled from those in the other degrees of freedom (dofs). In the present paper, a more detailed analysis of the subject is made in order to clarify the theoretical aspects of the procedure and supplement the previous analysis. Once again, the results are based on the data available for the OC4 FOWT. The accuracy obtained with the procedure is discussed not only in terms of the resulting motions, but also by comparing its effects on the second-order force spectra. A more detailed evaluation of the dynamic couplings is presented, and comparisons with the results obtained with Newman's approximation are made in simulations involving waves only.
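Newman's approximation, used above as a point of comparison, is commonly stated as building the off-diagonal QTF entries from the diagonal (mean-drift) terms alone; one common arithmetic-mean form is:

```latex
% One common statement of Newman's approximation for the slow-drift QTF:
% off-diagonal entries are replaced by the mean of the two diagonal
% (mean-drift) terms, which are much cheaper to compute.
T(\omega_i, \omega_j) \approx \tfrac{1}{2}\bigl[\, T(\omega_i, \omega_i) + T(\omega_j, \omega_j) \,\bigr]
```

This is accurate when the difference frequency is small relative to the wave frequencies, which is typically the case for resonant slow-drift motions of moored floaters.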


Author(s):  
Muath A. Obaidat ◽  
Joseph Brown

In recent years, blockchain has emerged as a popular data structure for use in software solutions. However, its meteoric rise has not been without criticism. Blockchain has been the subject of intense discussion in the field of cybersecurity because of its structural characteristics, mainly its permanence and decentralization. At the same time, blockchain technology in this field has received intense scrutiny and raised questions such as: Is the application of blockchain in the field simply a localized trend or bait for investors, neither offering hope for permanent, game-changing solutions? Or is blockchain an architecture that will lead to lasting disruptions in cybersecurity? This chapter aims to provide a neutral overview of why blockchain has risen as a popular pivot in cybersecurity, its current applications in this field, and an evaluation of what the future holds for this technology given both its limitations and advantages.


2019 ◽  
Vol 23 (03) ◽  
pp. e276-e280
Author(s):  
Gleide Viviani Maciel Almeida ◽  
Angela Ribas ◽  
Jorge Calleros

Introduction Even people with normal hearing may have difficulties locating a sound source in unfavorable sound environments where competitive noise is intense. Objective To develop, describe, validate and establish the normality curve of the sound localization test. Method The sample consisted of 100 healthy subjects with normal hearing, > 18 years old, who agreed to participate in the study. The sound localization test was applied after the subjects underwent a tonal audiometry exam. For this purpose, a calibrated free-field test environment was set up. Then, 30 random pure tones were presented through 2 speakers placed at 45° (on the right and on the left sides of the subject), and the noise was presented from a 3rd speaker, placed at 180°. The noise was presented in 3 hearing situations: optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of −10 dB. The subject was asked to point out the side where the pure tone was perceived, even in the presence of noise. Results All of the 100 participants performed the test, in an average time of 99 seconds. The average score was 21, the median score was 23, and the standard deviation was 3.05. Conclusion The sound localization test proved to be easy to set up and to apply. The results obtained in the validation of the test suggest that individuals with normal hearing should locate 70% of the presented stimuli. The test can constitute an important instrument in the measurement of the interference of noise with the ability to locate sound.
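Scoring for the test described above reduces to counting correctly lateralised tones against the 70% (21 of 30) normality cut-off; a minimal sketch, in which the left/right response coding is hypothetical:

```python
def localisation_score(responses, correct_sides):
    """Count correctly lateralised pure tones and check the score against
    the 70% normality cut-off suggested by the validation (21 of 30)."""
    score = sum(r == c for r, c in zip(responses, correct_sides))
    return score, score >= round(0.70 * len(correct_sides))

# Hypothetical 30-trial run with exactly 21 correct responses
correct = ["L", "R"] * 15
responses = correct[:21] + ["X"] * 9   # "X" marks an incorrect response
score, passed = localisation_score(responses, correct)
```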


2013 ◽  
Vol 133 (5) ◽  
pp. 2876-2882 ◽  
Author(s):  
William A. Yost ◽  
Louise Loiselle ◽  
Michael Dorman ◽  
Jason Burns ◽  
Christopher A. Brown

2018 ◽  
Vol 2018 ◽  
pp. 1-9
Author(s):  
Liang Xia ◽  
Jingchun He ◽  
Yuanyuan Sun ◽  
Yi Chen ◽  
Qiong Luo ◽  
...  

The acceptable noise level (ANL) was defined by subtracting the background noise level (BNL) from the most comfortable listening level (MCL) (ANL = MCL − BNL). This study compared the ANL obtained through different methods in 20 Chinese subjects with normal hearing. ANL was tested with Mandarin speech materials using a loudspeaker or earphones, with each subject tested either by himself or by the audiologist. The presentation and response modes were as follows: (1) loudspeaker with self-adjusted noise levels using audiometer controls (LS method); (2) loudspeaker with the subject signaling the audiologist to adjust speech and noise levels (LA method); (3) earphones with self-adjusted noise levels using audiometer controls (ES method); and (4) earphones with the subject signaling the audiologist to adjust speech and noise levels (EA method). ANL was calculated from three measurements with each method. There was no significant difference in the ANL obtained through the different presentation or response modes. The correlations between the ANL, MCL, and BNL obtained from each pair of methods were significant. In conclusion, the ANL in normal-hearing Mandarin listeners may not be affected by presentation mode (loudspeaker or earphones), nor is it affected by self-adjusted or audiologist-adjusted response modes. Earphone audiometry is as reliable as sound-field audiometry and provides an easy and convenient way to measure ANL.
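The ANL computation above is a simple subtraction of averaged levels; a sketch using hypothetical dB HL values:

```python
import statistics

def acceptable_noise_level(mcl_trials, bnl_trials):
    """ANL = MCL - BNL, with each level averaged over the three
    measurements taken per method in the study. A larger ANL means the
    listener tolerates less background noise."""
    return statistics.mean(mcl_trials) - statistics.mean(bnl_trials)

# Hypothetical measurements (dB HL): MCL near 50, BNL near 42
anl = acceptable_noise_level([50, 51, 49], [42, 43, 41])
```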

