Thresholds for apparent auditory motion induced by linearly changing interaural time differences

1976 ◽  
Vol 60 (S1) ◽  
pp. S102-S102
Author(s):  
C. M. Brandauer ◽  
Wayne Ward
Perception ◽  
10.1068/p6369 ◽  
2009 ◽  
Vol 38 (9) ◽  
pp. 1377-1385
Author(s):  
Takahiro Kawabe

In this study, I examined how sequential stream segregation contributes to the detection of diotic tones among tones with time-varying interaural time differences (ITDs). Target (T) and distractor (D) tones, together with a silent interval (–), formed a sequence (DTD–), which was presented repeatedly. A frequency difference was introduced between target and distractor tones. The distractor tones were also given time-varying ITDs to produce a percept of smooth auditory motion along the interaural axis. In half of the trials, the target tones were not given time-varying ITDs and thus were presented diotically. The listeners' task was to determine whether the repeated DTD– sequences contained target tones without motion. The sensitivity d′ for the detection of diotic target tones was higher with larger frequency differences, whereas the criterion c was lower with larger frequency differences. In another session, I confirmed that the proportion of "two streams" reports correlated positively with d′ and negatively with c. The results indicate that the localisation of a sound image can be influenced by sequential stream segregation in complex sound environments.
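The abstract reports results in terms of the standard signal-detection measures d′ (sensitivity) and c (criterion). As a reminder of how these are computed from a detection task like the one described, here is a minimal sketch using the conventional equal-variance Gaussian formulas, d′ = z(H) − z(F) and c = −(z(H) + z(F))/2, where H and F are hit and false-alarm rates (the specific rates below are illustrative, not data from the study):

```python
from statistics import NormalDist

def dprime_and_c(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance SDT sensitivity d' and criterion c.

    hit_rate: P("no motion" response | diotic target present)
    fa_rate:  P("no motion" response | diotic target absent)
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf
    criterion = -0.5 * (zh + zf)
    return d_prime, criterion

# Illustrative rates: symmetric performance gives an unbiased criterion (c = 0).
d, c = dprime_and_c(hit_rate=0.84, fa_rate=0.16)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

A lower (more negative) c means a more liberal bias toward reporting "no motion", which is the direction of the criterion shift the study reports for larger frequency differences.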


1997 ◽  
Vol 101 (5) ◽  
pp. 3105-3105
Author(s):  
Hisashi Uematsu ◽  
Makio Kashino ◽  
Tatsuya Hirahara

2007 ◽  
Vol 45 (3) ◽  
pp. 523-530 ◽  
Author(s):  
A. Brooks ◽  
R. van der Zwan ◽  
A. Billard ◽  
B. Petreska ◽  
S. Clarke ◽  
...  

2010 ◽  
Vol 30 (35) ◽  
pp. 11696-11702 ◽  
Author(s):  
N. A. Lesica ◽  
A. Lingner ◽  
B. Grothe

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Antje Ihlefeld ◽  
Nima Alamatsaz ◽  
Robert M Shapley

Human sound localization is an important computation performed by the brain. Models of sound localization commonly assume that sound lateralization from interaural time differences is level invariant. Here we observe that two prevalent theories of sound localization make opposing predictions. The labelled-line model encodes location through tuned representations of spatial location and predicts that perceived direction is level invariant. In contrast, the hemispheric-difference model encodes location through spike rate and predicts that perceived direction becomes medially biased at low sound levels. Behavioral experiments show that softer sounds are perceived closer to the midline than louder sounds, favoring rate-coding models of human sound localization. Analogously, visual depth perception, which is based on interocular disparity, depends on the contrast of the target. The similar results in hearing and vision suggest that the brain may use a canonical computation of location: encoding perceived location through population spike rate relative to baseline.
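The hemispheric-difference prediction in this abstract can be illustrated with a toy rate-coding model. The sketch below is an assumption for illustration only, not the authors' model: two hemispheric channels fire at a baseline rate plus a location-dependent term scaled by a level-dependent gain, and a decoder maps the rate difference back to a location assuming full gain. When the actual gain drops (a softer sound), the rate difference shrinks and the decoded location collapses toward the midline:

```python
def decoded_location(true_loc: float, gain: float,
                     baseline: float = 10.0, slope: float = 20.0) -> float:
    """Toy hemispheric-difference decoder (illustrative, not the published model).

    true_loc: lateral position in [-1, 1]; negative = left, positive = right.
    gain:     level-dependent gain in (0, 1]; lower for softer sounds.
    """
    # Each hemisphere responds to contralateral locations, on top of a baseline.
    r_left = baseline + gain * slope * max(0.0, -true_loc)
    r_right = baseline + gain * slope * max(0.0, true_loc)
    # The decoder assumes gain = 1 when converting the rate difference to location.
    return (r_right - r_left) / slope

print(decoded_location(0.5, gain=1.0))  # loud: decoded at the true location
print(decoded_location(0.5, gain=0.5))  # soft: decoded closer to the midline
```

With gain 1.0 the decoded location matches the true location; with gain 0.5 the same source is decoded halfway to the midline, which is the medial bias the rate-coding account predicts.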

