Neural responses to natural and model-matched stimuli reveal distinct computations in primary and non-primary auditory cortex

2018 ◽  
Author(s):  
Sam V. Norman-Haignere ◽  
Josh H. McDermott

Abstract
A central goal of sensory neuroscience is to construct models that can explain neural responses to complex, natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here we propose a simple alternative for testing a sensory model: we synthesize stimuli that yield the same model response as a natural stimulus, and test whether the natural and “model-matched” stimuli elicit the same neural response. We used this approach to test whether a common model of auditory cortex – in which spectrogram-like peripheral input is processed by linear spectrotemporal filters – can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect stimulus-driven correlations. We observed that fMRI voxel responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex, but that non-primary regions showed highly divergent responses to the two sound sets, suggesting that neurons in non-primary regions extract higher-order properties not made explicit by traditional models. This dissociation between primary and non-primary regions was not clear from model predictions due to the influence of stimulus-driven response correlations. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.

Author Summary
Modeling neural responses to natural stimuli is a core goal of sensory neuroscience. Here we propose a new approach for testing sensory models: we synthesize a “model-matched” stimulus that yields the same model response as a natural stimulus, and test whether it produces the same neural response.
We used model-matching to test whether a standard model of auditory cortex can explain human cortical responses measured with fMRI. Model-matched stimuli produced nearly equivalent voxel responses in primary auditory cortex, but highly divergent responses in non-primary regions. This dissociation was not evident using more standard approaches for model testing, and suggests that non-primary regions compute higher-order stimulus properties not captured by traditional models. The methodology could be broadly applied in other domains.
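The model-matching logic described above can be sketched with a toy stand-in for the real model: treat the "model response" as the energy in a handful of FFT bands, then synthesize a noise signal whose band energies equal those of a target sound. The paper's actual method matches a full spectrotemporal filter-bank representation via iterative synthesis; everything below (the band-energy model, function names, parameter values) is illustrative, not the authors' implementation.

```python
import numpy as np

def band_energies(x, n_bands=8):
    """Toy 'model response': signal energy in n_bands contiguous FFT bands."""
    X = np.fft.rfft(x)
    bands = np.array_split(np.abs(X) ** 2, n_bands)
    return np.array([b.sum() for b in bands])

def model_match(target, n_bands=8, n_iter=50, seed=0):
    """Synthesize noise whose band energies match those of `target`.

    Starts from random noise and repeatedly rescales each frequency band
    toward the target energy, so the result matches the 'model response'
    while differing from the target in every other respect (e.g., phase).
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(target))
    goal = band_energies(target, n_bands)
    for _ in range(n_iter):
        X = np.fft.rfft(x)
        cur = band_energies(x, n_bands)
        gains = np.sqrt(goal / np.maximum(cur, 1e-12))
        for idx, g in zip(np.array_split(np.arange(len(X)), n_bands), gains):
            X[idx] = X[idx] * g
        x = np.fft.irfft(X, n=len(target))
    return x
```

Comparing neural responses to `target` and `model_match(target)` then asks whether the brain region in question "sees" anything beyond what the band-energy model makes explicit, which is the logic of the comparison in the abstract.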

2013 ◽  
Vol 25 (2) ◽  
pp. 175-187 ◽  
Author(s):  
Jihoon Oh ◽  
Jae Hyung Kwon ◽  
Po Song Yang ◽  
Jaeseung Jeong

Neural responses in early sensory areas are influenced by top–down processing. In the visual system, early visual areas have been shown to actively participate in top–down processing based on their topographical properties. Although it has been suggested that the auditory cortex is involved in top–down control, functional evidence of topographic modulation is still lacking. Here, we show that mental auditory imagery for familiar melodies induces significant activation in the frequency-responsive areas of the primary auditory cortex (PAC). This activation is related to the characteristics of the imagery: when subjects were asked to imagine high-frequency melodies, we observed increased activation in the high- versus low-frequency response area; when the subjects were asked to imagine low-frequency melodies, the opposite was observed. Furthermore, we found that A1 is more closely related to the observed frequency-related modulation than R in tonotopic subfields of the PAC. Our findings suggest that top–down processing in the auditory cortex relies on a mechanism similar to that used in the perception of external auditory stimuli, comparable to that in early visual areas.


2000 ◽  
Vol 84 (3) ◽  
pp. 1453-1463 ◽  
Author(s):  
Jos J. Eggermont

Responses of single- and multi-units in primary auditory cortex were recorded for gap-in-noise stimuli for different durations of the leading noise burst. Both firing rate and inter-spike interval representations were evaluated. The minimum detectable gap decreased in exponential fashion with the duration of the leading burst to reach an asymptote for durations of 100 ms. Despite the fact that leading and trailing noise bursts had the same frequency content, the dependence on leading burst duration was correlated with psychophysical estimates of across frequency channel (different frequency content of leading and trailing burst) gap thresholds in humans. The duration of the leading burst plus that of the gap was represented in the all-order inter-spike interval histograms for cortical neurons. The recovery functions for cortical neurons could be modeled on the basis of fast synaptic depression and after-hyperpolarization produced by the onset response to the leading noise burst. This suggests that the minimum gap representation in the firing pattern of neurons in primary auditory cortex, and minimum gap detection in behavioral tasks, is largely determined by properties intrinsic to those cortical cells, or potentially to subcortical cells.
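The exponential dependence described above, where the minimum detectable gap shrinks with leading-burst duration and flattens out by roughly 100 ms, can be written as a simple decay toward an asymptote. The parameter values below are purely illustrative (chosen to reproduce the qualitative shape, not taken from the paper):

```python
import numpy as np

def gap_threshold(duration_ms, asymptote=2.0, span=18.0, tau=30.0):
    """Minimum detectable gap (ms) as an exponential decay toward an
    asymptote with leading-burst duration.

    Illustrative parameters: `asymptote` is the floor threshold reached
    for long leading bursts, `span` the extra threshold at zero duration,
    and `tau` the decay constant (all hypothetical values).
    """
    return asymptote + span * np.exp(-duration_ms / tau)

# Thresholds fall steeply for short leading bursts and are nearly
# asymptotic by ~100 ms, mirroring the qualitative finding.
durations = np.array([5, 10, 20, 50, 100, 200])
thresholds = gap_threshold(durations)
```

Fitting a curve of this form to measured thresholds yields the recovery time constant that the paper attributes to synaptic depression and after-hyperpolarization.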


2020 ◽  
Vol Publish Ahead of Print ◽  
Author(s):  
Fang Du ◽  
Ninglong Xu ◽  
Kai Wang ◽  
Chao Liang ◽  
Changhong Miao

2007 ◽  
Vol 98 (4) ◽  
pp. 2182-2195 ◽  
Author(s):  
Craig A. Atencio ◽  
David T. Blake ◽  
Fabrizio Strata ◽  
Steven W. Cheung ◽  
Michael M. Merzenich ◽  
...  

Many communication sounds, such as New World monkey twitter calls, contain frequency-modulated (FM) sweeps. To determine how this prominent vocalization element is represented in the auditory cortex we examined neural responses to logarithmic FM sweep stimuli in the primary auditory cortex (AI) of two awake owl monkeys. Using an implanted array of microelectrodes we quantitatively characterized neuronal responses to FM sweeps and to random tone-pip stimuli. Tone-pip responses were used to construct spectrotemporal receptive fields (STRFs). Classification of FM sweep responses revealed few neurons with high direction and speed selectivity. Most neurons responded to sweeps in both directions and over a broad range of sweep speeds. Characteristic frequency estimates from FM responses were highly correlated with estimates from STRFs, although spectral receptive field bandwidth was consistently underestimated by FM stimuli. Predictions of FM direction selectivity and best speed from STRFs were significantly correlated with observed FM responses, although some systematic discrepancies existed. Last, the population distributions of FM responses in the awake owl monkey were similar to, although of longer temporal duration than, those in anesthetized squirrel monkeys.
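Constructing an STRF from random tone-pip responses, as described above, amounts to reverse correlation: averaging the stimulus spectrogram that preceded each spike. Below is a minimal spike-triggered-average sketch with a simulated neuron; the paper's quantitative characterization is more involved, and the simulation parameters here are purely illustrative.

```python
import numpy as np

def strf_sta(stimulus_spec, spikes, n_lags=20):
    """Spike-triggered-average STRF estimate.

    stimulus_spec: (n_freq, n_time) spectrogram of the random tone-pip stimulus
    spikes:        (n_time,) spike counts per time bin
    Returns an (n_freq, n_lags) array; lag index k holds the average
    stimulus k+1 bins before a spike.
    """
    n_freq, n_time = stimulus_spec.shape
    strf = np.zeros((n_freq, n_lags))
    total = 0
    for t in range(n_lags, n_time):
        if spikes[t]:
            # stimulus window preceding the spike, reversed so lag runs
            # from most recent (index 0) to oldest (index n_lags-1)
            strf += spikes[t] * stimulus_spec[:, t - n_lags:t][:, ::-1]
            total += spikes[t]
    return strf / max(total, 1)
```

For a simulated neuron that fires whenever a pip occurs in one frequency band at a fixed delay, the STA recovers a peak at exactly that frequency and lag, which is the sense in which tone-pip STRFs predict responses to other stimuli such as FM sweeps.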


2001 ◽  
Vol 86 (5) ◽  
pp. 2616-2620 ◽  
Author(s):  
Xiaoqin Wang ◽  
Siddhartha C. Kadia

A number of studies in various species have demonstrated that natural vocalizations generally produce stronger neural responses than do their time-reversed versions. The majority of neurons in the primary auditory cortex (A1) of marmoset monkeys respond more strongly to natural marmoset vocalizations than to the time-reversed vocalizations. However, it was unclear whether such differences in neural responses were simply due to the difference between the acoustic structures of natural and time-reversed vocalizations or whether they also resulted from the difference in behavioral relevance of the two types of stimuli. To address this issue, we have compared neural responses to natural and time-reversed marmoset twitter calls in A1 of cats with those obtained from A1 of marmosets using identical stimuli. It was found that the preference for natural marmoset twitter calls demonstrated in marmoset A1 was absent in cat A1. While both cortices responded approximately equally to time-reversed twitter calls, marmoset A1 responded much more strongly to natural twitter calls than did cat A1. This differential representation of marmoset vocalizations in the two cortices suggests that experience-dependent and possibly species-specific mechanisms are involved in cortical processing of communication sounds.


PLoS ONE ◽  
2011 ◽  
Vol 6 (10) ◽  
pp. e25895 ◽  
Author(s):  
Chao Dong ◽  
Ling Qin ◽  
Yongchun Liu ◽  
Xinan Zhang ◽  
Yu Sato

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Agnès Landemard ◽  
Célian Bimbard ◽  
Charlie Demené ◽  
Shihab Shamma ◽  
Sam Norman-Haignere ◽  
...  

Little is known about how neural representations of natural sounds differ across species. For example, speech and music play a unique role in human hearing, yet it is unclear how auditory representations of speech and music differ between humans and other animals. Using functional ultrasound imaging, we measured responses in ferrets to a set of natural and spectrotemporally matched synthetic sounds previously tested in humans. Ferrets showed similar lower-level frequency and modulation tuning to that observed in humans. But while humans showed substantially larger responses to natural vs. synthetic speech and music in non-primary regions, ferret responses to natural and synthetic sounds were closely matched throughout primary and non-primary auditory cortex, even when tested with ferret vocalizations. This finding reveals that auditory representations in humans and ferrets diverge sharply at late stages of cortical processing, potentially driven by higher-order processing demands in speech and music.

