Streaming of repeated noise in primary and secondary fields of auditory cortex

2019 ◽  
Author(s):  
Daniela Saderi ◽  
Bradley N Buran ◽  
Stephen V David

Statistical regularities in natural sounds facilitate the perceptual segregation of auditory sources, or streams. Repetition is one cue that drives stream segregation in humans, but the neural basis of this perceptual phenomenon remains unknown. We demonstrated a similar perceptual ability in animals by training ferrets to detect a stream of repeating noise samples (foreground) embedded in a stream of random samples (background). During passive listening, we recorded neural activity in primary (A1) and secondary (PEG) fields of auditory cortex. We used two context-dependent encoding models to test for evidence of streaming of the repeating stimulus. The first was based on average evoked activity per noise sample and the second on the spectro-temporal receptive field (STRF). Both approaches tested whether changes in the neural response to repeating versus random stimuli were better modeled by scaling the response to both streams equally (global gain) or by separately scaling the response to the foreground versus background stream (stream-specific gain). Consistent with previous observations of adaptation, we found an overall reduction in global gain when the stimulus began to repeat. However, when we measured stream-specific changes in gain, responses to the foreground were enhanced relative to the background. This enhancement was stronger in PEG than A1. In A1, enhancement was strongest in units with low sparseness (i.e., broad sensory tuning) and with tuning selective for the repeated sample. Enhancement of responses to the foreground relative to the background provides evidence for stream segregation that emerges in A1 and is refined in PEG.
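The global-gain versus stream-specific-gain comparison described in this abstract can be sketched as follows. This is a minimal simulation, not the paper's actual encoding models: the stream drives, gains, and noise level are made-up stand-ins, and the fit is ordinary least squares rather than the STRF-based procedure the study used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-sample drive from the foreground (repeating) and
# background (random) streams, plus a neural response in which the
# foreground is enhanced relative to the background.
fg = rng.random(200)   # foreground stream drive (hypothetical)
bg = rng.random(200)   # background stream drive (hypothetical)
resp = 1.4 * fg + 0.7 * bg + 0.05 * rng.standard_normal(200)

# Global-gain model: one scale factor applied to both streams equally.
X_global = (fg + bg)[:, None]
g, *_ = np.linalg.lstsq(X_global, resp, rcond=None)
err_global = np.mean((resp - X_global @ g) ** 2)

# Stream-specific model: separate gains for foreground and background.
X_stream = np.column_stack([fg, bg])
g2, *_ = np.linalg.lstsq(X_stream, resp, rcond=None)
err_stream = np.mean((resp - X_stream @ g2) ** 2)

# With a true foreground enhancement in the data, the stream-specific
# model fits better, and its foreground gain exceeds its background gain.
print(err_stream < err_global, g2[0] > g2[1])
```

In this toy version, a lower prediction error for the stream-specific model plays the role of the model-comparison evidence for stream segregation reported in the abstract.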

2012 ◽  
Vol 107 (9) ◽  
pp. 2366-2382 ◽  
Author(s):  
Yonatan I. Fishman ◽  
Christophe Micheyl ◽  
Mitchell Steinschneider

The ability to detect and track relevant acoustic signals embedded in a background of other sounds is crucial for hearing in complex acoustic environments. This ability is exemplified by a perceptual phenomenon known as “rhythmic masking release” (RMR). To demonstrate RMR, a sequence of tones forming a target rhythm is intermingled with physically identical “Distracter” sounds that perceptually mask the rhythm. The rhythm can be “released from masking” by adding “Flanker” tones in adjacent frequency channels that are synchronous with the Distracters. RMR represents a special case of auditory stream segregation, whereby the target rhythm is perceptually segregated from the background of Distracters when they are accompanied by the synchronous Flankers. The neural basis of RMR is unknown. Previous studies suggest the involvement of primary auditory cortex (A1) in the perceptual organization of sound patterns. Here, we recorded neural responses to RMR sequences in A1 of awake monkeys in order to identify neural correlates and potential mechanisms of RMR. We also tested whether two current models of stream segregation, when applied to these responses, could account for the perceptual organization of RMR sequences. Results suggest a key role for suppression of Distracter-evoked responses by the simultaneous Flankers in the perceptual restoration of the target rhythm in RMR. Furthermore, predictions of stream segregation models paralleled the psychoacoustics of RMR in humans. These findings reinforce the view that preattentive or “primitive” aspects of auditory scene analysis may be explained by relatively basic neural mechanisms at the cortical level.
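The RMR stimulus construction described above can be illustrated schematically. The slot count, rhythm pattern, and frequencies below are made-up parameters for illustration, not the study's actual stimuli.

```python
# Target tones and Distracters are physically identical; Flankers occupy
# an adjacent frequency channel (all values hypothetical).
target_freq = 1000.0   # Hz
flanker_freq = 1500.0  # Hz

# One entry per time slot: the target rhythm occupies some slots,
# Distracters fill the rest.
target_rhythm = [1, 0, 0, 1, 0, 1, 0, 0]  # 1 = target tone in this slot

def build_sequence(with_flankers):
    sequence = []
    for has_target in target_rhythm:
        if has_target:
            slot = [("target", target_freq)]
        else:
            # A Distracter at the target frequency perceptually masks
            # the rhythm...
            slot = [("distracter", target_freq)]
            if with_flankers:
                # ...but a synchronous Flanker in the adjacent channel
                # releases the rhythm from masking.
                slot.append(("flanker", flanker_freq))
        sequence.append(slot)
    return sequence

masked = build_sequence(with_flankers=False)    # rhythm is masked
released = build_sequence(with_flankers=True)   # rhythm is released
print(len(masked), len(released))
```

The two sequences differ only in the presence of Flankers synchronous with the Distracters, which is what makes RMR a clean test of stream segregation without changing the target or Distracter tones themselves.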


2005 ◽  
Vol 17 (4) ◽  
pp. 641-651 ◽  
Author(s):  
Rhodri Cusack

The structuring of the sensory scene (perceptual organization) profoundly affects what we perceive and is of increasing clinical interest. In both vision and audition, many cues have been identified that influence perceptual organization, but little is known about its neural basis. Previous studies have suggested that auditory cortex may play a role in auditory perceptual organization (also called auditory stream segregation). However, these studies were limited in two ways: they examined only auditory cortex, and the stimuli used to generate different organizations had different physical characteristics, which in themselves may have led to the differences in neural response. In the current study, functional magnetic resonance imaging was used to test for an effect of perceptual organization across the whole brain. To avoid confounding physical changes to the stimuli with differences in perceptual organization, we exploited an ambiguous auditory figure that is sometimes perceived as a single auditory stream and sometimes as two streams. We found that regions in the intraparietal sulcus (IPS) showed greater activity when two streams were perceived rather than one. The specific involvement of this region in perceptual organization is exciting, as a growing literature suggests a role for the IPS in binding in vision, touch, and cross-modally. This evidence is discussed, and a general role is proposed for regions of the IPS in structuring sensory input.


1985 ◽  
Vol 53 (4) ◽  
pp. 1109-1145 ◽  
Author(s):  
N. Suga ◽  
K. Tsuzuki

For echolocation the mustached bat, Pteronotus parnellii, emits complex orientation sounds (pulses), each consisting of four harmonics with long constant-frequency components (CF1-4) followed by short frequency-modulated components (FM1-4). The CF signals are best suited for target detection and measurement of target velocity. The CF/CF area of the auditory cortex of this species contains neurons sensitive to pulse-echo pairs. These CF/CF combination-sensitive neurons extract velocity information from Doppler-shifted echoes. In this study we electrophysiologically investigated the frequency tuning of CF/CF neurons for excitation, facilitation, and inhibition. CF1/CF2 and CF1/CF3 combination-sensitive neurons responded poorly to individual signal elements in pulse-echo pairs but showed strong facilitation of responses to pulse-echo pairs. The essential components in the pairs were CF1 of the pulse and CF2 or CF3 of the echo. In 68% of CF/CF neurons, the frequency-tuning curves for facilitation were extremely sharp for CF2 or CF3 and were "level-tolerant" so that the bandwidths of the tuning curves were less than 5.0% of best frequencies even at high stimulus levels. Facilitative tuning curves for CF1 were level tolerant only in 6% of the neurons studied. CF/CF neurons were specialized for fine analysis of the frequency relationship between two CF sounds regardless of sound pressure levels. Some CF/CF neurons responded to single-tone stimuli. Frequency-tuning curves for excitation (responses to single-tone stimuli) were extremely sharp and level tolerant for CF2 or CF3 in 59% of CF1/CF2 neurons and 70% of CF1/CF3 neurons. Tuning to CF1 was level tolerant in only 9% of these neurons. Sharp level-tolerant tuning may be the neural basis for small difference limens in frequency at high stimulus levels. Sharp level-tolerant tuning curves were sandwiched between broad inhibitory areas. Best frequencies for inhibition were slightly higher or lower than the best frequencies for facilitation and excitation. We thus conclude that sharp level-tolerant tuning curves are produced by inhibition. The extent to which neural sharpening occurred differed among groups of neurons tuned to different frequencies. The more important the frequency analysis of a particular component in biosonar signals, the more pronounced the neural sharpening. This was in addition to the peripheral specialization for fine frequency analysis of that component. The difference in bandwidth or quality factor between the excitatory tuning curves of peripheral neurons and the facilitative and excitatory tuning curves of CF/CF neurons was larger at higher stimulus levels. (ABSTRACT TRUNCATED AT 400 WORDS)
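The bandwidth criterion in this abstract (tuning-curve bandwidth below 5.0% of best frequency) can be sketched numerically. The tuning curve below is a hypothetical Gaussian placed near 60 kHz for illustration; the half-maximum criterion is our simplification, not the paper's exact threshold definition.

```python
import numpy as np

def tuning_bandwidth(freqs, responses, criterion=0.5):
    """Return (best_freq, bandwidth) of a tuning curve, where bandwidth
    is the frequency span over which the response exceeds a given
    fraction of the peak (half-maximum by default)."""
    above = freqs[responses >= criterion * responses.max()]
    best_freq = freqs[np.argmax(responses)]
    return best_freq, above.max() - above.min()

# Hypothetical sharp tuning curve centered near 60 kHz (roughly the
# CF2 range for this species), with a 400 Hz Gaussian width.
freqs = np.linspace(55_000, 65_000, 1001)
responses = np.exp(-0.5 * ((freqs - 60_000) / 400.0) ** 2)

bf, bw = tuning_bandwidth(freqs, responses)
# "Level tolerant" in the abstract's sense: bandwidth under 5% of the
# best frequency even at high stimulus levels.
print(bw / bf < 0.05)
```

For this simulated neuron the half-maximum bandwidth is under 1 kHz against a 60 kHz best frequency, comfortably inside the 5% criterion the abstract uses.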


2010 ◽  
Vol 68 (2) ◽  
pp. 107-113 ◽  
Author(s):  
Kazuya Saitoh ◽  
Shinji Inagaki ◽  
Masataka Nishimura ◽  
Hideo Kawaguchi ◽  
Wen-Jie Song

Neuroreport ◽  
2004 ◽  
Vol 15 (9) ◽  
pp. 1511-1514 ◽  
Author(s):  
Susann Deike ◽  
Birgit Gaschler-Markefski ◽  
André Brechmann ◽  
Henning Scheich

NeuroImage ◽  
2019 ◽  
Vol 200 ◽  
pp. 242-249 ◽  
Author(s):  
Biao Han ◽  
Pim Mostert ◽  
Floris P. de Lange

2014 ◽  
Vol 112 (1) ◽  
pp. 81-94 ◽  
Author(s):  
Vanessa C. Miller-Sims ◽  
Sarah W. Bottjer

Like humans, songbirds learn vocal sounds from “tutors” during a sensitive period of development. Vocal learning in songbirds therefore provides a powerful model system for investigating neural mechanisms by which memories of learned vocal sounds are stored. This study examined whether NCM (caudo-medial nidopallium), a region of higher level auditory cortex in songbirds, serves as a locus where a neural memory of tutor sounds is acquired during early stages of vocal learning. NCM neurons respond well to complex auditory stimuli, and evoked activity in many NCM neurons habituates such that the response to a stimulus that is heard repeatedly decreases to approximately one-half its original level (stimulus-specific adaptation). The rate of neural habituation serves as an index of familiarity, being low for familiar sounds, but high for novel sounds. We found that response strength across different song stimuli was higher in NCM neurons of adult zebra finches than in juveniles, and that only adult NCM responded selectively to tutor song. The rate of habituation across both tutor song and novel conspecific songs was lower in adult than in juvenile NCM, indicating higher familiarity and a more persistent response to song stimuli in adults. In juvenile birds that have memorized tutor vocal sounds, neural habituation was higher for tutor song than for a familiar conspecific song. This unexpected result suggests that the response to tutor song in NCM at this age may be subject to top-down influences that maintain the tutor song as a salient stimulus, despite its high level of familiarity.
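The habituation-rate index described above can be sketched as an exponential-decay fit to trial-by-trial response strength. This is an illustrative estimator built on simulated decay curves, not the paper's actual analysis; the decay constants are made up.

```python
import numpy as np

def habituation_rate(responses):
    """Decay rate of response strength across repeated presentations,
    estimated as the (negated) slope of a linear fit in log space.
    Larger values mean faster habituation, i.e. a more novel stimulus;
    values near zero indicate familiarity."""
    trials = np.arange(len(responses))
    slope, _ = np.polyfit(trials, np.log(responses), 1)
    return -slope

# Novel stimulus: response decays steadily toward roughly half its
# initial level over repeated presentations.
novel = 0.95 ** np.arange(20)
# Familiar stimulus: nearly flat response across repetitions.
familiar = 0.995 ** np.arange(20)

print(habituation_rate(novel) > habituation_rate(familiar))
```

Under this index, a juvenile NCM response to tutor song that habituates faster than the response to an equally familiar conspecific song would register as "more novel," which is the unexpected pattern the abstract reports.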


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Daniela Saderi ◽  
Zachary P Schwartz ◽  
Charles R Heller ◽  
Jacob R Pennington ◽  
Stephen V David

Both generalized arousal and engagement in a specific task influence sensory neural processing. To isolate effects of these state variables in the auditory system, we recorded single-unit activity from primary auditory cortex (A1) and inferior colliculus (IC) of ferrets during a tone detection task, while monitoring arousal via changes in pupil size. We used a generalized linear model to assess the influence of task engagement and pupil size on sound-evoked activity. In both areas, these two variables affected independent neural populations. Pupil size effects were more prominent in IC, while pupil and task engagement effects were equally likely in A1. Task engagement was correlated with larger pupil size; thus, some apparent effects of task engagement should in fact be attributed to fluctuations in pupil size. These results indicate a hierarchy of auditory processing, in which generalized arousal enhances activity in the midbrain and effects specific to task engagement become more prominent in cortex.
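The state-variable regression described above can be sketched with an identity-link (Gaussian) GLM fit by least squares. The variable names, coefficients, and noise model below are simulated stand-ins, not the study's actual design or fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical per-trial state variables: normalized pupil size and a
# binary task-engagement flag.
pupil = rng.random(n)
engaged = rng.integers(0, 2, n).astype(float)

# Simulated firing rate with independent contributions from each state
# variable (Gaussian noise, for simplicity).
rate = 5.0 + 2.0 * pupil + 1.0 * engaged + 0.3 * rng.standard_normal(n)

# Fit the linear predictor: intercept + pupil + engagement terms.
X = np.column_stack([np.ones(n), pupil, engaged])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
baseline, b_pupil, b_engaged = beta

# A reliably nonzero coefficient indicates that the corresponding state
# variable modulates sound-evoked activity for this unit.
print(round(b_pupil, 1), round(b_engaged, 1))
```

Because pupil size and task engagement are correlated in practice, fitting both regressors jointly (as here) is what lets the model attribute shared variance appropriately, which is the point the abstract makes about apparent engagement effects.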

