A Behavioral Paradigm for Determining the Effect of Attention on the Activity of Single Units in the Monkey's Auditory Cortex

1974 ◽  
Vol 55 (2) ◽  
pp. 468-468
Author(s):  
H. Heffner ◽  
S. Hocherman ◽  
M. H. Goldstein

1999 ◽  
Vol 82 (3) ◽  
pp. 1542-1559 ◽  
Author(s):  
Michael Brosch ◽  
Andreas Schulz ◽  
Henning Scheich

It is well established that the tone-evoked response of neurons in auditory cortex can be attenuated if another tone is presented several hundred milliseconds before. The present study explores in detail a complementary phenomenon in which the tone-evoked response is enhanced by a preceding tone. Action potentials from multiunit groups and single units were recorded from primary and caudomedial auditory cortical fields in lightly anesthetized macaque monkeys. Stimuli were two suprathreshold tones of 100-ms duration, presented in succession. The frequency of the first tone and the stimulus onset asynchrony (SOA) between the two tones were varied systematically, whereas the second tone was fixed. Compared with presenting the second tone in isolation, the response to the second tone was enhanced significantly when it was preceded by the first tone. This was observed in 87 of 130 multiunit groups and in 29 of 69 single units, with no obvious difference between auditory fields. Response enhancement occurred for a wide range of SOAs (110–329 ms) and for a wide range of frequencies of the first tone. Most of the first tones that enhanced the response to the second tone evoked responses themselves. The stimulus that, on average, produced maximal enhancement was a tone pair with an SOA of 120 ms and a frequency separation of about one octave. The frequency/SOA combinations that induced response enhancement were mostly different from the ones that induced response attenuation. Results suggest that response enhancement, in addition to response attenuation, provides a basic neural mechanism involved in the cortical processing of the temporal structure of sounds.
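The comparison at the heart of this paradigm, the second-tone response with versus without a preceding tone, can be summarized by a simple ratio. The following is a minimal sketch with entirely hypothetical spike counts; the function name and numbers are illustrative, not taken from the paper:

```python
import numpy as np

def enhancement_index(paired_counts, alone_counts):
    """Ratio of the mean second-tone response when preceded by a first
    tone to the mean response when the second tone is presented alone.
    Values > 1 indicate enhancement, values < 1 indicate attenuation."""
    return np.mean(paired_counts) / np.mean(alone_counts)

# Hypothetical spike counts (spikes per trial) for one unit:
alone = [4, 5, 3, 6, 4]    # second tone in isolation
paired = [7, 8, 6, 9, 7]   # second tone preceded by a first tone at 120-ms SOA

idx = enhancement_index(paired, alone)
print(f"enhancement index: {idx:.2f}")  # prints "enhancement index: 1.68"
```

Sweeping this index over first-tone frequency and SOA would map out the enhancement region the study describes.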


2019 ◽  
Author(s):  
Ruiye Ni ◽  
David A. Bender ◽  
Dennis L. Barbour

Abstract
The ability to process speech signals in challenging listening environments is critical for speech perception. Great efforts have been made to reveal the underlying single-unit encoding mechanism; however, considerable variability is usually found in single-unit responses, and the population coding mechanism has yet to be revealed. In this study, we aimed to examine how a population of neurons encodes behaviorally relevant signals subject to changes in intensity and signal-to-noise ratio (SNR). We recorded single-unit activity from the primary auditory cortex of awake common marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white Gaussian noise (WGN) and vocalization babble (Babble). By pooling all single units together, pseudo-population analysis showed that intra- and inter-trajectory angle evolutions of the population neural responses track vocalization identity and intensity/SNR, respectively. The ability of the trajectory to track vocalization attributes was degraded to different degrees by the different noises. Discrimination performance of neural populations, evaluated with neural response classifiers, revealed that a finer optimal temporal resolution and a longer time scale of temporal dynamics were needed for vocalizations in noise than for vocalizations at multiple intensities. The ability of population responses to discriminate between different vocalizations was mostly retained above the detection threshold.

Significance Statement
How our brain excels at precise acoustic signal encoding in noisy environments is of great interest to scientists. Relatively few studies have tackled this question from the perspective of neural population responses. Population analysis reveals the underlying neural encoding of complex acoustic stimuli based on a pool of single units via vector coding. We suggest that spatial population response vectors are one important way for neurons to integrate multiple attributes of natural acoustic signals, specifically marmosets' vocalizations.
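The intra- and inter-trajectory angles described above reduce to angles between population response vectors (one dimension per unit) at given time bins. A minimal sketch with synthetic pseudo-population data, assuming Poisson-like firing rates and hypothetical vocalization labels:

```python
import numpy as np

def vector_angle(u, v):
    """Angle in degrees between two population response vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical pseudo-population firing rates: 20 units x 10 time bins
rng = np.random.default_rng(0)
resp_vocA = rng.poisson(5.0, size=(20, 10)).astype(float)  # vocalization A
resp_vocB = rng.poisson(8.0, size=(20, 10)).astype(float)  # vocalization B

# Inter-trajectory angle at each time bin: separates the two vocalizations
inter = [vector_angle(resp_vocA[:, t], resp_vocB[:, t]) for t in range(10)]

# Intra-trajectory angle between successive time bins of one trajectory:
# tracks how the response to a single vocalization evolves over time
intra = [vector_angle(resp_vocA[:, t], resp_vocA[:, t + 1]) for t in range(9)]

print(f"mean inter-trajectory angle: {np.mean(inter):.1f} deg")
```

Degrading the stimuli (lower SNR, added babble) would be expected to shrink the inter-trajectory angles, which is one way to quantify the loss of identity tracking the abstract reports.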


1998 ◽  
Vol 80 (2) ◽  
pp. 863-881 ◽  
Author(s):  
John C. Middlebrooks ◽  
Li Xu ◽  
Ann Clock Eddins ◽  
David M. Green

Middlebrooks, John C., Li Xu, Ann Clock Eddins, and David M. Green. Codes for sound-source location in nontonotopic auditory cortex. J. Neurophysiol. 80: 863–881, 1998. We evaluated two hypothetical codes for sound-source location in the auditory cortex. The topographical code assumed that single neurons are selective for particular locations and that sound-source locations are coded by the cortical location of small populations of maximally activated neurons. The distributed code assumed that the responses of individual neurons can carry information about locations throughout 360° of azimuth and that accurate sound localization derives from information that is distributed across large populations of such panoramic neurons. We recorded from single units in the anterior ectosylvian sulcus area (area AES) and in area A2 of α-chloralose–anesthetized cats. Results obtained in the two areas were essentially equivalent. Noise bursts were presented from loudspeakers spaced in 20° intervals of azimuth throughout 360° of the horizontal plane. Spike counts of the majority of units were modulated >50% by changes in sound-source azimuth. Nevertheless, sound-source locations that produced greater than half-maximal spike counts often spanned >180° of azimuth. The spatial selectivity of units tended to broaden and, often, to shift in azimuth as sound pressure levels (SPLs) were increased to a moderate level. We sometimes saw systematic changes in spatial tuning along segments of electrode tracks as long as 1.5 mm, but such progressions were not evident at higher sound levels. Moderate-level sounds presented anywhere in the contralateral hemifield produced greater than half-maximal activation of nearly all units. These results are not consistent with the hypothesis of a topographic code. We used an artificial-neural-network algorithm to recognize spike patterns and, thereby, infer the locations of sound sources.
Network input consisted of spike density functions formed by averages of responses to eight stimulus repetitions. Information carried in the responses of single units permitted reasonable estimates of sound-source locations throughout 360° of azimuth. The most accurate units exhibited median errors in localization of <25°, meaning that the network output fell within 25° of the correct location on half of the trials. Spike patterns tended to vary with stimulus SPL, but level-invariant features of patterns permitted estimates of locations of sound sources that varied through 20-dB ranges. Sound localization based on spike patterns that preserved details of spike timing consistently was more accurate than localization based on spike counts alone. These results support the hypothesis that sound-source locations are represented by a distributed code and that individual neurons are, in effect, panoramic localizers.
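The decoding step above can be illustrated without a neural network: a nearest-template classifier on spike density functions captures the same idea of inferring azimuth from single-unit response patterns. Everything below is synthetic and hypothetical, a stand-in for the paper's actual network and data; only the 20° loudspeaker spacing and the eight-repetition template averaging follow the description above:

```python
import numpy as np

rng = np.random.default_rng(1)
azimuths = np.arange(0, 360, 20)  # 18 loudspeaker locations at 20-deg spacing

def spike_density(az, n_bins=16, noise=0.3):
    """Hypothetical panoramic unit: both the overall count (level-like
    envelope) and the spike timing (phase term) vary with azimuth."""
    t = np.linspace(0, 1, n_bins)
    envelope = np.exp(-3 * t) * (1 + np.cos(np.radians(az)))
    timing = np.sin(2 * np.pi * t + np.radians(az))
    return envelope + 0.5 * timing + noise * rng.normal(size=n_bins)

# Templates: average of eight stimulus repetitions per location
templates = {az: np.mean([spike_density(az) for _ in range(8)], axis=0)
             for az in azimuths}

def decode(pattern):
    """Return the azimuth whose template best matches the spike pattern."""
    return min(azimuths, key=lambda az: np.sum((pattern - templates[az]) ** 2))

# Median absolute localization error over fresh single trials,
# with wrap-around distance on the 360-deg circle
errors = []
for az in azimuths:
    d = abs(decode(spike_density(az)) - az)
    errors.append(min(d, 360 - d))
print(f"median error: {np.median(errors):.0f} deg")
```

A unit whose pattern carries timing as well as count information supports decoding across the full circle, which is the sense in which the paper calls such neurons panoramic localizers.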


2002 ◽  
Vol 7 (4) ◽  
pp. 214-227 ◽  
Author(s):  
Richard G. Rutkowski ◽  
Trevor M. Shackleton ◽  
Jan W.H. Schnupp ◽  
Mark N. Wallace ◽  
Alan R. Palmer
