Linking Normative Models of Natural Tasks to Descriptive Models of Neural Response

2017
Author(s):
Priyank Jaini
Johannes Burge

Understanding how nervous systems exploit task-relevant properties of sensory stimuli to perform natural tasks is fundamental to the study of perceptual systems. However, there are few formal methods for determining which stimulus properties are most useful for a given task. As a consequence, it is difficult to develop principled models for how to compute task-relevant latent variables from natural signals, and it is difficult to evaluate descriptive models fit to neural responses. Accuracy Maximization Analysis (AMA) is a recently developed Bayesian method for finding the optimal task-specific filters (receptive fields). Here, we introduce AMA-Gauss, a new, faster form of AMA that incorporates the assumption that the class-conditional filter responses are Gaussian distributed. Next, we use AMA-Gauss to show that its assumptions are justified for two fundamental visual tasks: retinal speed estimation and binocular disparity estimation. Then, we show that AMA-Gauss has striking formal similarities to popular quadratic models of neural response: the energy model and the Generalized Quadratic Model (GQM). Together, these developments deepen our understanding of why the energy model of neural response has proven useful, improve our ability to evaluate results from subunit model fits to neural data, and should help accelerate psychophysics and neuroscience research with natural stimuli.
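The Gaussian class-conditional assumption makes the optimal decoder quadratic in the filter responses, which is the formal bridge to the energy model and the GQM that the abstract points to. Below is a minimal sketch of that idea only, not the authors' AMA-Gauss implementation; the function names and the regularization constant are illustrative assumptions.

```python
import numpy as np

def fit_class_conditionals(responses, labels):
    """Fit a Gaussian to the filter responses of each class (latent-variable level).

    responses: (n_samples, n_filters) filter-response matrix
    labels:    (n_samples,) integer class labels
    """
    params = {}
    for c in np.unique(labels):
        r = responses[labels == c]
        mu = r.mean(axis=0)
        cov = np.cov(r, rowvar=False) + 1e-6 * np.eye(r.shape[1])  # regularize
        params[c] = (mu, cov)
    return params

def log_posterior(r, params):
    """Posterior over classes for one response vector r (flat prior assumed).

    Each class log-likelihood is quadratic in r, which is the formal link to
    quadratic response models (energy model, GQM) noted in the abstract.
    """
    logps = {}
    for c, (mu, cov) in params.items():
        d = r - mu
        _, logdet = np.linalg.slogdet(cov)
        logps[c] = -0.5 * (d @ np.linalg.solve(cov, d) + logdet)
    vals = np.array(list(logps.values()))
    vals -= vals.max()                      # stabilize the exponentials
    post = np.exp(vals) / np.exp(vals).sum()
    return dict(zip(logps.keys(), post))
```

With well-separated classes, the posterior concentrates on the class whose Gaussian best explains the observed filter responses.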

2021
Author(s):
Nasim Winchester Vahidi

The mechanisms underlying how single auditory neurons and neuron populations encode natural and acoustically complex vocal signals, such as human speech or bird songs, are not well understood. Classical models focus on individual neurons, whose spike rates vary systematically as a function of change in a small number of simple acoustic dimensions. However, neurons in the caudal medial nidopallium (NCM), an auditory forebrain region in songbirds that is analogous to the secondary auditory cortex in mammals, have composite receptive fields (CRFs) that comprise multiple acoustic features tied to both increases and decreases in firing rates. Here, we investigated the anatomical organization and temporal activation patterns of auditory CRFs in European starlings exposed to natural vocal communication signals (songs). We recorded extracellular electrophysiological responses to various bird songs at auditory NCM sites, including both single and multiple neurons, and we then applied a quadratic model to extract large sets of CRF features that were tied to excitatory and suppressive responses at each measurement site. We found that the superset of CRF features yielded spatially and temporally distributed, generalizable representations of a conspecific song. Individual sites responded to acoustically diverse features, as there was no discernable organization of features across anatomically ordered sites. The CRF features at each site yielded broad, temporally distributed responses that spanned the entire duration of many starling songs, which can last for 50 s or more. Based on these results, we estimated that a nearly complete representation of any conspecific song, regardless of length, can be obtained by evaluating populations as small as 100 neurons. 
We conclude that natural acoustic communication signals drive a distributed yet highly redundant representation across the songbird auditory forebrain, in which adjacent neurons contribute to the encoding of multiple diverse and time-varying spectro-temporal features.
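The quadratic-model step can be illustrated with the standard trick of eigendecomposing a fitted (symmetric) quadratic kernel: eigenvectors with positive eigenvalues act as excitatory stimulus features, and those with negative eigenvalues as suppressive ones. A hedged sketch of that split, with an assumed function name and tolerance, not the authors' analysis code:

```python
import numpy as np

def crf_features(Q, tol=1e-8):
    """Split a fitted quadratic kernel Q into excitatory and suppressive features.

    Eigenvectors of the symmetrized kernel with positive eigenvalues are
    stimulus features tied to firing-rate increases; negative eigenvalues
    mark features tied to firing-rate decreases.
    """
    Q = 0.5 * (Q + Q.T)            # enforce symmetry
    w, v = np.linalg.eigh(Q)       # eigenvalues in ascending order
    excitatory = v[:, w > tol]
    suppressive = v[:, w < -tol]
    return excitatory, suppressive
```

Collecting these feature sets across recording sites would give the kind of CRF-feature superset the abstract describes.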


2020
Vol 123 (3)
pp. 912-926
Author(s):
Arkadeb Dutta
Tidhar Lev-Ari
Ouriel Barzilay
Rotem Mairon
Alon Wolf
...

Segregation of objects from the background is a basic and essential property of the visual system. We studied the neural detection of objects defined by orientation difference from background in barn owls (Tyto alba). We presented wide-field displays of densely packed stripes with a dominant orientation. Visual objects were created by orienting a circular patch differently from the background. In head-fixed conditions, neurons in both tecto- and thalamofugal visual pathways (optic tectum and visual Wulst) were weakly responsive to these objects in their receptive fields. However, notably, in freely viewing conditions, barn owls occasionally perform peculiar side-to-side head motions (peering) when scanning the environment. In the second part of the study we thus recorded the neural response from head-fixed owls while the visual displays replicated the peering conditions; i.e., the displays (objects and backgrounds) were shifted along trajectories that induced a retinal motion identical to sampled peering motions during viewing of a static object. These conditions induced dramatic neural responses to the objects, in the very same neurons that were unresponsive to the objects in static displays. By reverting to circular motions of the display, we show that the pattern of the neural response is mostly shaped by the orientation of the background relative to motion and not the orientation of the object. Thus, our findings provide evidence that peering and/or other self-motions can facilitate orientation-based figure-ground segregation through interaction with inhibition from the surround. NEW & NOTEWORTHY Animals frequently move their sensory organs and thereby create motion cues that can enhance object segregation from background. We address a special example of such active sensing, in barn owls. When scanning the environment, barn owls occasionally perform small-amplitude side-to-side head movements called peering. We show that the visual outcome of such peering movements elicits neural detection of objects that are rotated from the dominant orientation of the background scene and which are otherwise mostly undetected. These results suggest a novel role for self-motions in sensing objects that break the regular orientation of elements in the scene.


2008
Vol 99 (4)
pp. 1616-1627
Author(s):
Ben Scholl
Xiang Gao
Michael Wehr

Responses of cortical neurons to sensory stimuli within their receptive fields can be profoundly altered by the stimulus context. In visual and somatosensory cortex, contextual interactions have been shown to change sign from facilitation to suppression depending on stimulus strength. Contextual modulation of high-contrast stimuli tends to be suppressive, but for low-contrast stimuli tends to be facilitative. This trade-off may optimize contextual integration by cortical cells and has been suggested to be a general feature of cortical processing, but it remains unknown whether a similar phenomenon occurs in auditory cortex. Here we used whole cell and single-unit recordings to investigate how contextual interactions in auditory cortical neurons depend on the relative intensity of masker and probe stimuli in a two-tone stimulus paradigm. We tested the hypothesis that relatively low-level probes should show facilitation, whereas relatively high-level probes should show suppression. We found that contextual interactions were primarily suppressive across all probe levels, and that relatively low-level probes were subject to stronger suppression than high-level probes. These results were virtually identical for spiking and subthreshold responses. This suggests that, unlike visual cortical neurons, auditory cortical neurons show maximal suppression rather than facilitation for relatively weak stimuli.


2009
Vol 102 (6)
pp. 3329-3339
Author(s):
Nima Mesgarani
Stephen V. David
Jonathan B. Fritz
Shihab A. Shamma

Population responses of cortical neurons encode considerable details about sensory stimuli, and the encoded information is likely to change with stimulus context and behavioral conditions. The details of encoding are difficult to discern across large sets of single neuron data because of the complexity of naturally occurring stimulus features and cortical receptive fields. To overcome this problem, we used the method of stimulus reconstruction to study how complex sounds are encoded in primary auditory cortex (AI). This method uses a linear spectro-temporal model to map neural population responses to an estimate of the stimulus spectrogram, thereby enabling a direct comparison between the original stimulus and its reconstruction. By assessing the fidelity of such reconstructions from responses to modulated noise stimuli, we estimated the range over which AI neurons can faithfully encode spectro-temporal features. For stimuli containing statistical regularities (typical of those found in complex natural sounds), we found that knowledge of these regularities substantially improves reconstruction accuracy over reconstructions that do not take advantage of this prior knowledge. Finally, contrasting stimulus reconstructions under different behavioral states offered a novel view of the rapid changes in spectro-temporal response properties induced by attentional and motivational state.
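Stimulus reconstruction of this kind is commonly posed as regularized linear regression from (lagged) population responses to the spectrogram. A minimal ridge-regression sketch under that reading, assuming the lagged-response matrix has already been built; the names and penalty value are illustrative, not the authors' code:

```python
import numpy as np

def fit_reconstruction_filter(R, S, lam=1.0):
    """Fit a linear map G so that S_hat = R @ G approximates the stimulus.

    R: (n_timebins, n_neurons_x_lags) matrix of lagged population responses
    S: (n_timebins, n_freq) stimulus spectrogram
    Ridge-regularized least squares; lam controls the penalty.
    """
    n = R.shape[1]
    G = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ S)
    return G

def reconstruct(R, G):
    """Map population responses back to an estimated spectrogram."""
    return R @ G
```

Comparing the reconstruction against the original spectrogram then gives a direct, interpretable measure of what the population encodes.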


2004
Vol 16 (8)
pp. 1579-1600
Author(s):
Eric K. C. Tsang
Bertram E. Shi

The relative depth of objects causes small shifts in the left and right retinal positions of these objects, called binocular disparity. This letter describes an electronic implementation of a single binocularly tuned complex cell based on the binocular energy model, which has been proposed to model disparity-tuned complex cells in the mammalian primary visual cortex. Our system consists of two silicon retinas representing the left and right eyes, two silicon chips containing retinotopic arrays of spiking neurons with monocular Gabor-type spatial receptive fields, and logic circuits that combine the spike outputs to compute a disparity-selective complex cell response. The tuned disparity can be adjusted electronically by introducing either position or phase shifts between the monocular receptive field profiles. Mismatch between the monocular receptive field profiles caused by transistor mismatch can degrade the relative responses of neurons tuned to different disparities. In our system, the relative responses between neurons tuned by phase encoding are better matched than neurons tuned by position encoding. Our numerical sensitivity analysis indicates that the relative responses of phase-encoded neurons are least sensitive to the receptive field parameters that vary the most in our system. We conjecture that this robustness may be one reason for the existence of phase-encoded disparity-tuned neurons in biological neural systems.
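The binocular energy model itself can be sketched compactly: two binocular simple cells in quadrature each sum their left- and right-eye inputs, and the complex-cell response is the sum of their squared outputs, with disparity tuning set by either a position or a phase shift between the monocular receptive fields. A minimal 1-D software analogue of the circuit follows; parameter values are illustrative, and this is not the chip implementation described above.

```python
import numpy as np

def gabor(x, sigma=1.0, freq=1.0, phase=0.0):
    """1-D Gabor receptive field profile."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def binocular_energy(stim_left, stim_right, x,
                     disparity_shift=0.0, phase_shift=0.0):
    """Binocular energy model: squared sums of a quadrature pair of
    binocular simple cells. Disparity tuning comes from a position shift
    (disparity_shift) or a phase shift (phase_shift) between the left-
    and right-eye receptive fields."""
    energy = 0.0
    for ph in (0.0, np.pi / 2):                       # quadrature pair
        left = gabor(x, phase=ph) @ stim_left
        right = gabor(x - disparity_shift, phase=ph + phase_shift) @ stim_right
        energy += (left + right) ** 2
    return energy
```

A stimulus whose interocular offset matches the model's position shift drives a much larger energy response than an unmatched one, which is the disparity selectivity the chip computes in silicon.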


2005
Vol 94 (1)
pp. 788-798
Author(s):
Valerio Mante
Matteo Carandini

A recent optical imaging study of primary visual cortex (V1) by Basole, White, and Fitzpatrick demonstrated that maps of preferred orientation depend on the choice of stimuli used to measure them. These authors measured population responses expressed as a function of the optimal orientation of long drifting bars. They then varied bar length, direction, and speed and found that stimuli of the same orientation can elicit different population responses and stimuli with different orientations can elicit similar population responses. We asked whether these results can be explained from known properties of V1 receptive fields. We implemented an “energy model” where a receptive field integrates stimulus energy over a region of three-dimensional frequency space. The population of receptive fields defines a volume of visibility, which covers all orientations and a plausible range of spatial and temporal frequencies. This energy model correctly predicts the population response to bars of different length, direction, and speed and explains the observations made with optical imaging. The model also readily explains a related phenomenon, the appearance of motion streaks for fast-moving dots. We conclude that the energy model can be applied to activation maps of V1 and predicts phenomena that may otherwise appear to be surprising. These results indicate that maps obtained with optical imaging reflect the layout of neurons selective for stimulus energy, not for isolated stimulus features such as orientation, direction, and speed.
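The frequency-domain reading of this energy model can be sketched directly: each receptive field integrates stimulus power over its own region of (fx, fy, ft) frequency space, and the population response is the set of those integrals. A minimal sketch under that reading; the boolean-mask receptive fields are an illustrative simplification, not the paper's model.

```python
import numpy as np

def population_response(stimulus, rf_masks):
    """Energy model in the frequency domain: each receptive field integrates
    stimulus power over its own region (mask) of 3-D frequency space.

    stimulus: (ny, nx, nt) space-time luminance movie
    rf_masks: dict mapping a name to a boolean mask over the 3-D DFT grid
    """
    power = np.abs(np.fft.fftn(stimulus)) ** 2      # stimulus energy spectrum
    return {name: power[mask].sum() for name, mask in rf_masks.items()}
```

A drifting grating concentrates its energy at a single spatiotemporal frequency, so only receptive fields whose frequency-space region covers that point respond, regardless of how the stimulus would be described in terms of isolated features.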


Author(s):  
Katarzyna Kordecka
Andrzej T. Foik
Agnieszka Wierzbicka
Wioletta J. Waleszczyk

Repetitive visual stimulation is successfully used to study visual evoked potential (VEP) plasticity in the mammalian visual system. Practicing visual tasks or repeated exposure to sensory stimuli can induce neuronal network changes in cortical circuits and improve the perception of these stimuli. However, little is known about the effect of visual training at the subcortical level. In the present study, we extend this knowledge by showing positive effects of such training in the rat superior colliculus (SC). In electrophysiological experiments, we showed that a single training session lasting several hours induces a response enhancement both in the primary visual cortex (V1) and in the SC. Further, we tested whether collicular responses would be enhanced without V1 input. For this reason, we inactivated V1 by applying xylocaine solution onto the cortical surface during visual training. Our results revealed that the SC's response enhancement was present even without V1 inputs and did not differ in amplitude from the VEP enhancement observed while V1 was active. These data suggest that plasticity and facilitation can develop independently but simultaneously in different parts of the visual system.


2017
Vol 17 (10)
pp. 411
Author(s):
Johannes Burge
Priyank Jaini

Perception
1997
Vol 26 (1_suppl)
pp. 35-35
Author(s):
M T Wallace

Multisensory integration in the superior colliculus (SC) of the cat requires a protracted postnatal developmental time course. Kittens 3–135 days postnatal (dpn) were examined and the first neuron capable of responding to two different sensory inputs (auditory and somatosensory) was not seen until 12 dpn. Visually responsive multisensory neurons were not encountered until 20 dpn. These early multisensory neurons responded weakly to sensory stimuli, had long response latencies, large receptive fields, and poorly developed response selectivities. Most striking, however, was their inability to integrate cross-modality cues in order to produce the significant response enhancement or depression characteristic of these neurons in adults. The incidence of multisensory neurons increased gradually over the next 10–12 weeks. During this period, sensory responses became more robust, latencies shortened, receptive fields decreased in size, and unimodal selectivities matured. The first neurons capable of cross-modality integration were seen at 28 dpn. For the following two months, the incidence of such integrative neurons rose gradually until adult-like values were achieved. Surprisingly, however, as soon as a multisensory neuron exhibited this capacity, most of its integrative features were indistinguishable from those in adults. Given what is known about the requirements for multisensory integration in adult animals, this observation suggests that the appearance of multisensory integration reflects the onset of functional corticotectal inputs.

