Where versus what: The relationship between speech understanding and auditory motion processing

2020 ◽  
Vol 148 (4) ◽  
pp. 2574-2574
Author(s):  
Michaela Warnecke ◽  
Ruth Y. Litovsky

2007 ◽  
Vol 45 (3) ◽  
pp. 523-530 ◽  
Author(s):  
A. Brooks ◽  
R. van der Zwan ◽  
A. Billard ◽  
B. Petreska ◽  
S. Clarke ◽  
...  

SLEEP ◽  
2019 ◽  
Vol 42 (7) ◽  
Author(s):  
Leonie Kirszenblat ◽  
Rebecca Yaun ◽  
Bruno van Swinderen

Abstract: Sleep optimizes waking behavior; however, waking experience may also influence sleep. We used the fruit fly Drosophila melanogaster to investigate the relationship between visual experience and sleep in wild-type and mutant flies. We found that the classical visual mutant optomotor-blind (omb), which has undeveloped horizontal system/vertical system (HS/VS) motion-processing cells and is defective in motion and visual salience perception, showed dramatically reduced and less consolidated sleep compared to wild-type flies. In contrast, optogenetic activation of the HS/VS motion-processing neurons in wild-type flies led to an increase in sleep following the activation, suggesting an increase in sleep pressure. Surprisingly, exposing wild-type flies to repetitive motion stimuli for extended periods did not increase sleep pressure. However, exposing flies to more complex image sequences from a movie led to more consolidated sleep, particularly when the images were randomly shuffled through time. Our results suggest that specific forms of visual experience, namely those engaging motion circuits or involving complex, nonrepetitive imagery, drive sleep need in Drosophila.
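As context for the sleep measures reported above: in the Drosophila literature, sleep is conventionally scored as any period of at least five minutes of inactivity, and consolidation is summarized via bout counts and durations. The sketch below illustrates that convention on synthetic activity counts; the function name and data are illustrative, not the authors' code.

```python
import numpy as np

def sleep_bouts(activity, bin_minutes=1, min_sleep_minutes=5):
    """Identify sleep bouts from binned activity counts.

    A bout is a run of consecutive inactive bins lasting at least
    `min_sleep_minutes` (the conventional Drosophila sleep criterion).
    Returns a list of bout durations in minutes.
    """
    inactive = np.asarray(activity) == 0
    bouts, run = [], 0
    for bin_is_inactive in inactive:
        if bin_is_inactive:
            run += 1
        else:
            if run * bin_minutes >= min_sleep_minutes:
                bouts.append(run * bin_minutes)
            run = 0
    if run * bin_minutes >= min_sleep_minutes:
        bouts.append(run * bin_minutes)
    return bouts

# Synthetic activity counts per minute for a short recording
counts = [0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 1]
bouts = sleep_bouts(counts)
print(sum(bouts), np.mean(bouts))  # total sleep (min), mean bout duration
```

Total sleep and mean bout duration computed this way are common proxies for "reduced" and "less consolidated" sleep, respectively.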


2013 ◽  
Vol 109 (2) ◽  
pp. 321-331 ◽  
Author(s):  
David A. Magezi ◽  
Karin A. Buetler ◽  
Leila Chouiter ◽  
Jean-Marie Annoni ◽  
Lucas Spierer

Following prolonged exposure to adaptor sounds moving in a single direction, participants may perceive stationary-probe sounds as moving in the opposite direction [direction-selective auditory motion aftereffect (aMAE)] and may be less sensitive to the motion of probe sounds that are actually moving (motion-sensitive aMAE). The neural mechanisms of aMAEs, and notably whether they are due to adaptation of direction-selective motion detectors as found in vision, are presently unknown; resolving this question would provide critical insight into auditory motion processing. We measured human behavioral responses and auditory evoked potentials to probe sounds following four types of moving-adaptor sounds: leftward and rightward unidirectional, bidirectional, and stationary. Behavioral data replicated both direction-selective and motion-sensitive aMAEs. Electrical neuroimaging analyses of auditory evoked potentials to stationary probes revealed no significant difference in either global field power (GFP) or scalp topography between the leftward and rightward conditions, suggesting that aMAEs are not based on adaptation of direction-selective motion detectors. By contrast, the bidirectional and stationary conditions differed significantly in the stationary-probe GFP at 200 ms poststimulus onset, without concomitant topographic modulation, indicative of a difference in response strength between statistically indistinguishable intracranial generators. The magnitude of this GFP difference was positively correlated with the magnitude of the motion-sensitive aMAE, supporting the functional relevance of the neurophysiological measures. Electrical source estimations revealed that the GFP difference followed from a modulation of activity in predominantly right-hemisphere frontal-temporal-parietal brain regions previously implicated in auditory motion processing. Our collective results suggest that auditory motion processing relies on motion-sensitive but, in contrast to vision, non-direction-selective mechanisms.
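For reference, global field power (GFP) is conventionally defined as the spatial standard deviation of the average-referenced scalp potential at each time point. A minimal sketch of that computation (the array shapes and the synthetic data are illustrative, not the authors' pipeline):

```python
import numpy as np

def global_field_power(eeg):
    """Global field power: the spatial standard deviation of the
    average-referenced scalp potential at each time point.

    eeg : array of shape (n_electrodes, n_samples)
    returns : array of shape (n_samples,)
    """
    referenced = eeg - eeg.mean(axis=0, keepdims=True)  # average reference
    return np.sqrt((referenced ** 2).mean(axis=0))

# Synthetic 64-channel evoked response, 500 samples
rng = np.random.default_rng(0)
evoked = rng.standard_normal((64, 500))
gfp = global_field_power(evoked)
print(gfp.shape)  # (500,)
```

Because GFP collapses the scalp map to a single strength value per time point, a GFP difference without a topographic difference is interpreted, as above, as a change in response strength from statistically indistinguishable generators.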


2014 ◽  
Vol 14 (13) ◽  
pp. 4-4 ◽  
Author(s):  
F. Jiang ◽  
G. C. Stecker ◽  
I. Fine

2014 ◽  
Vol 40 (3) ◽  
pp. 265-272 ◽  
Author(s):  
L. B. Shestopalova ◽  
E. A. Petropavlovskaia ◽  
S. Ph. Vaitulevich ◽  
N. I. Nikitin

2018 ◽  
Author(s):  
Ceren Battal ◽  
Mohamed Rezk ◽  
Stefania Mattioni ◽  
Jyothirmayi Vadlamudi ◽  
Olivier Collignon

Abstract: The ability to compute the location and direction of sounds is a crucial perceptual skill for interacting efficiently with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to leftward-, rightward-, upward-, and downward-moving sounds, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human Planum Temporale (hPT). Using an independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were nonetheless significantly distinct. Altogether, our results demonstrate that hPT codes for both auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.

Significance Statement: In comparison to what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human Planum Temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex, by showing that these two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital hMT+/V5 region for computing visual motion.
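Cross-condition decoding of the kind reported here is typically implemented by training a classifier on patterns from one condition and testing it on the other; above-chance transfer indicates shared pattern geometry. A minimal sketch on synthetic data (the voxel counts, classifier choice, and variable names are assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical hPT voxel patterns: rows = trials, columns = voxels.
rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200
X_motion = rng.standard_normal((n_trials, n_voxels))  # moving-sound trials
y_motion = rng.integers(0, 2, n_trials)               # 0 = left, 1 = right
X_static = rng.standard_normal((n_trials, n_voxels))  # static-sound trials
y_static = rng.integers(0, 2, n_trials)               # 0 = left, 1 = right

# Cross-condition decoding: fit on motion directions, test on static
# locations. Above-chance accuracy implies shared pattern geometry.
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_motion, y_motion)
print("cross-condition accuracy:", clf.score(X_static, y_static))
```

A real analysis would add within-condition cross-validation and permutation tests to establish significance.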


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261295
Author(s):  
Florian Langner ◽  
Julie G. Arenberg ◽  
Andreas Büchner ◽  
Waldo Nogueira

Objectives: The relationship between electrode-nerve interface (ENI) estimates and inter-subject differences in speech performance with sequential and simultaneous channel stimulation in adult cochlear implant listeners was explored. We investigated the hypothesis that individuals with good ENIs would perform better with simultaneous compared to sequential channel stimulation speech processing strategies than those estimated to have poor ENIs.

Methods: Fourteen postlingually deafened cochlear implant users participated in the study. Speech understanding was assessed with a sentence test at signal-to-noise ratios that resulted in 50% performance for each user with the baseline strategy, F120 Sequential. Two simultaneous stimulation strategies with either two (Paired) or three sets of virtual channels (Triplet) were tested at the same signal-to-noise ratio. ENI measures were estimated through: (I) voltage spread with electrical field imaging, (II) behavioral detection thresholds with focused stimulation, and (III) slope (IPG slope effect) and 50%-point differences (dB offset effect) of amplitude growth functions from electrically evoked compound action potentials (eCAPs) measured with two interphase gaps (IPGs).

Results: A significant effect of strategy on speech understanding performance was found, with Triplets showing a trend towards worse speech understanding than sequential stimulation. Focused thresholds correlated positively with the difference required to reach most comfortable level (MCL) between the Sequential and Triplet strategies, an indirect measure of channel interaction. A significant offset effect (the difference in dB between the 50%-points of eCAP growth functions obtained with the two IPGs) was observed. No significant correlation was observed between the slopes for the two IPGs tested. None of the measures used in this study correlated with the differences in speech understanding scores between strategies.

Conclusions: The ENI measure based on behavioral focused thresholds could explain some of the difference in MCLs, but none of the ENI measures could explain the decrease in speech understanding with an increasing number of simultaneously stimulated electrodes in the processing strategies.
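For readers unfamiliar with the eCAP measures: the IPG slope effect and dB offset effect can be derived from linear fits to amplitude growth functions (AGFs) recorded with two interphase gaps. A minimal sketch, assuming a linear AGF over the measured range; all levels and amplitudes below are hypothetical, not data from the study:

```python
import numpy as np

def agf_slope_and_50pt(levels_db, amplitudes):
    """Linear fit of an eCAP amplitude growth function (AGF).

    Returns the slope (uV/dB) and the level (dB) at which the fit
    reaches 50% of the maximum measured amplitude.
    """
    slope, intercept = np.polyfit(levels_db, amplitudes, 1)
    half_max = 0.5 * np.max(amplitudes)
    level_50 = (half_max - intercept) / slope
    return slope, level_50

# Hypothetical AGFs for two interphase gaps (IPGs)
levels = np.array([40.0, 43.0, 46.0, 49.0, 52.0])        # dB, illustrative
amp_short_ipg = np.array([20.0, 60.0, 110.0, 150.0, 200.0])  # uV
amp_long_ipg = np.array([40.0, 90.0, 145.0, 195.0, 250.0])   # uV

s1, p1 = agf_slope_and_50pt(levels, amp_short_ipg)
s2, p2 = agf_slope_and_50pt(levels, amp_long_ipg)
print("IPG slope effect:", s2 - s1)   # slope difference between IPGs
print("dB offset effect:", p1 - p2)   # shift of the 50%-point in dB
```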


2018 ◽  
Vol 29 (10) ◽  
pp. 948-954 ◽  
Author(s):  
Paige Heeke ◽  
Andrew J. Vermiglio ◽  
Emery Bulla ◽  
Keerthana Velappan ◽  
Xiangming Fang

Abstract: Temporal acoustic cues are particularly important for speech understanding, and past research has inferred a relationship between temporal resolution and speech-recognition-in-noise ability. A temporal resolution disorder is thought to affect speech understanding because affected persons cannot accurately encode frequency transitions, creating speech discrimination errors even in the presence of normal pure-tone hearing.

The primary purpose was to investigate the relationship between temporal resolution, as measured by the Random Gap Detection Test (RGDT), and speech-recognition-in-noise performance, as measured by the Hearing in Noise Test (HINT), in adults with normal audiometric thresholds. The second purpose was to examine the relationship between temporal resolution and spatial release from masking.

The HINT and RGDT protocols were administered under headphones according to the guidelines specified by the developers. The HINT uses an adaptive protocol to determine the signal-to-noise ratio at which the participant recognizes 50% of the sentences. For the HINT conditions, the target sentences were presented at 0°, and the steady-state speech-shaped noise and a four-talker babble (4TB) were presented at 0°, +90°, or −90° for the noise-front, noise-right, and noise-left conditions, respectively. The RGDT evaluates temporal resolution by determining the smallest time interval between two matching stimuli that the participant can detect; the RGDT threshold is the shortest time interval at which the participant detects a gap. Tonal (0.5, 1, 2, and 4 kHz) and click random-gap subtests were presented at 60 dB HL, with the tonal subtests presented in random order to minimize presentation-order effects.

Twenty-one young, native English-speaking participants with normal pure-tone thresholds (≤25 dB HL for 500–4000 Hz) took part in the study. Their average age was 20.2 years (SD = 0.66).

Spearman rho correlation coefficients were computed in SPSS 22 (IBM Corp., Armonk, NY) to determine the relationships between HINT and RGDT thresholds and derived measures (spatial advantage and composite scores). Nonparametric testing was used because of the ordinal nature of RGDT data.

Moderate negative correlations (p < 0.05) were found between eight pairs of RGDT and HINT threshold measures, and a moderate positive correlation (p < 0.05) was found between RGDT click thresholds and HINT 4TB spatial advantage, suggesting that as temporal resolution worsened, speech-recognition-in-noise performance improved. However, these correlations were no longer statistically significant after Bonferroni correction for multiple comparisons.

The results of the present study imply that the RGDT and HINT tap different temporal processes: performance on the RGDT cannot be predicted from HINT thresholds, or vice versa.
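A minimal sketch of the statistical approach described above: Spearman rho between RGDT and HINT thresholds with a Bonferroni-corrected alpha. The data below are synthetic placeholders, not the study's data, and the number of comparisons is an assumption for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 21  # participants, as in the study sample
rgdt_ms = rng.uniform(2, 20, n)      # RGDT thresholds (ms), synthetic
hint_db = rng.normal(-2.5, 1.5, n)   # HINT SNR-50 thresholds (dB), synthetic

rho, p = spearmanr(rgdt_ms, hint_db)  # nonparametric, suits ordinal RGDT data
n_comparisons = 9                     # assumed number of threshold pairs tested
alpha_bonferroni = 0.05 / n_comparisons
print(f"rho = {rho:.2f}, p = {p:.3f}, "
      f"significant after correction: {p < alpha_bonferroni}")
```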


Author(s):  
Alexandra A. Ludwig ◽  
Rudolf Rübsamen ◽  
Gerd J. Dörrscheidt ◽  
Sonja A. Kotz
