Encoding and Decoding Dynamic Sensory Signals with Recurrent Neural Networks: An Application of Conceptors to Birdsongs

2017 ◽  
Author(s):  
Richard Gast ◽  
Patrick Faion ◽  
Kai Standvoss ◽  
Andrea Suckro ◽  
Brian Lewis ◽  
...  

Abstract
In a constantly changing environment, the brain has to make sense of dynamic patterns of sensory input. These patterns can refer to stimuli with a complex, hierarchical structure that has to be inferred from the neural activity of sensory areas in the brain. Such areas have been found to be locally recurrently structured as well as hierarchically organized within a given sensory domain. While there is a great body of work identifying neural representations of various sensory stimuli at different hierarchical levels, less is known about the nature of these representations. In this work, we propose a model that describes a way to encode and decode sensory stimuli based on the activity patterns of multiple, recurrently connected neural populations with different receptive fields. We demonstrate the ability of our model to learn and recognize complex, dynamic stimuli using birdsongs as exemplary data. These birdsongs can be described by a two-level hierarchical structure, i.e., as sequences of syllables. Our model matches this hierarchy by learning single syllables at a first level and sequences of these syllables at a top level. Model performance on recognition tasks is investigated for an increasing number of syllables or songs to recognize and compared to state-of-the-art machine learning approaches. Finally, we discuss the implications of our model for the understanding of sensory pattern processing in the brain. We conclude that the employed encoding and decoding mechanisms might capture general computational principles of how the brain extracts relevant information from the activity of recurrently connected neural populations.
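The conceptors this abstract refers to constrain reservoir dynamics to the subspace a learned pattern occupies, using the standard formula C = R(R + α⁻²I)⁻¹ over the reservoir state correlation matrix R. The numpy sketch below is only an illustration of that core computation under toy data; the network sizes, the `aperture` value, and the two-pattern recognition setup are assumptions, not the authors' full hierarchical model:

```python
import numpy as np

def conceptor(X, aperture=1.0):
    """Conceptor matrix C = R (R + aperture^-2 I)^-1 from reservoir states.

    X: (n_neurons, n_timesteps) array of reservoir activations.
    Smaller apertures make C more selective for the pattern's subspace.
    """
    n_neurons, n_steps = X.shape
    R = X @ X.T / n_steps                       # state correlation matrix
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(n_neurons))

# Toy reservoir states driven by two different "syllables" (illustrative data:
# the two patterns excite opposite ends of the neuron population).
rng = np.random.default_rng(0)
X_a = rng.normal(size=(20, 200)) * np.linspace(1, 2, 20)[:, None]
X_b = rng.normal(size=(20, 200)) * np.linspace(2, 1, 20)[:, None]
C_a, C_b = conceptor(X_a), conceptor(X_b)

# Recognition: states from pattern A fit C_a better than C_b on average,
# measured by the quadratic evidence z^T C z.
ev_a = np.mean(np.einsum('it,ij,jt->t', X_a, C_a, X_a))
ev_b = np.mean(np.einsum('it,ij,jt->t', X_a, C_b, X_a))
```

Classification then amounts to picking the conceptor with the highest evidence for the observed states.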

2016 ◽  
Vol 113 (4) ◽  
pp. E420-E429 ◽  
Author(s):  
Mariam Aly ◽  
Nicholas B. Turk-Browne

Attention influences what is later remembered, but little is known about how this occurs in the brain. We hypothesized that behavioral goals modulate the attentional state of the hippocampus to prioritize goal-relevant aspects of experience for encoding. Participants viewed rooms with paintings, attending to room layouts or painting styles on different trials during high-resolution functional MRI. We identified template activity patterns in each hippocampal subfield that corresponded to the attentional state induced by each task. Participants then incidentally encoded new rooms with art while attending to the layout or painting style, and memory was subsequently tested. We found that when task-relevant information was better remembered, the hippocampus was more likely to have been in the correct attentional state during encoding. This effect was specific to the hippocampus, and not found in medial temporal lobe cortex, category-selective areas of the visual system, or elsewhere in the brain. These findings provide mechanistic insight into how attention transforms percepts into memories.


2021 ◽  
Vol 15 ◽  
Author(s):  
Philipp Weidel ◽  
Renato Duarte ◽  
Abigail Morrison

Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
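The problem with ad hoc place cells can be made concrete with a non-spiking toy: an unsupervised clustering step learns a discrete partition of a continuous observation space, which a tabular RL agent could then index into. This is only a sketch of the general idea of self-organized state representations (the observation bounds mimic Mountain Car; the hand-rolled k-means and all names are illustrative, not the authors' spiking architecture):

```python
import numpy as np

def kmeans(obs, k, iters=50, seed=0):
    """Unsupervised partitioning of continuous observations into k states."""
    rng = np.random.default_rng(seed)
    centers = obs[rng.choice(len(obs), k, replace=False)]
    for _ in range(iters):
        # assign each observation to its nearest center, then re-estimate
        labels = np.argmin(((obs[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = obs[labels == j].mean(axis=0)
    return centers

def to_state(x, centers):
    """Map a continuous observation to a discrete state index (Q-table row)."""
    return int(np.argmin(((x - centers) ** 2).sum(-1)))

# Toy Mountain-Car-like observations: (position, velocity) pairs.
rng = np.random.default_rng(1)
obs = rng.uniform([-1.2, -0.07], [0.6, 0.07], size=(500, 2))
centers = kmeans(obs, k=16)
state = to_state(np.array([0.0, 0.0]), centers)
```

The learned partition replaces hand-specified receptive fields: the agent's Q-table is indexed by `state`, and nothing about the environment's layout was supplied by the designer.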


2020 ◽  
Author(s):  
Colin Bredenberg ◽  
Eero P. Simoncelli ◽  
Cristina Savin

Abstract
Neural populations do not perfectly encode the sensory world: their capacity is limited by the number of neurons, metabolic and other biophysical resources, and intrinsic noise. The brain is presumably shaped by these limitations, improving efficiency by discarding some aspects of incoming sensory streams, while preferentially preserving commonly occurring, behaviorally relevant information. Here we construct a stochastic recurrent neural circuit model that can learn efficient, task-specific sensory codes using a novel form of reward-modulated Hebbian synaptic plasticity. We illustrate the flexibility of the model by training an initially unstructured neural network to solve two different tasks: stimulus estimation and stimulus discrimination. The network achieves high performance in both tasks by appropriately allocating resources and using its recurrent circuitry to best compensate for different levels of noise. We also show how the interaction between stimulus priors and task structure dictates the emergent network representations.
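"Reward-modulated Hebbian plasticity" refers to the family of three-factor rules in which a Hebbian coincidence term is gated by a scalar reward signal. A generic rate-based numpy sketch of such a rule follows; the learning rate, baseline subtraction, and outer-product form are common conventions assumed for illustration, not the paper's exact update:

```python
import numpy as np

def reward_modulated_hebbian(w, pre, post, reward, reward_baseline, lr=0.01):
    """Three-factor update: Hebbian term gated by a reward prediction error.

    w: (n_post, n_pre) weight matrix; pre/post: activity vectors on one trial.
    Co-active pairs are strengthened when reward exceeds its baseline and
    weakened when it falls short.
    """
    rpe = reward - reward_baseline               # scalar modulatory factor
    return w + lr * rpe * np.outer(post, pre)    # gated Hebbian outer product

# Illustrative trial: same activity, opposite reward outcomes.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(3, 5))
pre, post = rng.random(5), rng.random(3)
w_up = reward_modulated_hebbian(w, pre, post, reward=1.0, reward_baseline=0.5)
w_down = reward_modulated_hebbian(w, pre, post, reward=0.0, reward_baseline=0.5)
```

Because the reward factor is a single scalar broadcast to every synapse, the rule stays local (each synapse needs only its own pre- and postsynaptic activity plus the global signal), which is what makes such rules biologically plausible.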


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 35-35 ◽  
Author(s):  
M T Wallace

Multisensory integration in the superior colliculus (SC) of the cat requires a protracted postnatal developmental time course. Kittens 3 – 135 days postnatal (dpn) were examined and the first neuron capable of responding to two different sensory inputs (auditory and somatosensory) was not seen until 12 dpn. Visually responsive multisensory neurons were not encountered until 20 dpn. These early multisensory neurons responded weakly to sensory stimuli, had long response latencies, large receptive fields, and poorly developed response selectivities. Most striking, however, was their inability to integrate cross-modality cues in order to produce the significant response enhancement or depression characteristic of these neurons in adults. The incidence of multisensory neurons increased gradually over the next 10 – 12 weeks. During this period, sensory responses became more robust, latencies shortened, receptive fields decreased in size, and unimodal selectivities matured. The first neurons capable of cross-modality integration were seen at 28 dpn. For the following two months, the incidence of such integrative neurons rose gradually until adult-like values were achieved. Surprisingly, however, as soon as a multisensory neuron exhibited this capacity, most of its integrative features were indistinguishable from those in adults. Given what is known about the requirements for multisensory integration in adult animals, this observation suggests that the appearance of multisensory integration reflects the onset of functional corticotectal inputs.


2020 ◽  
Author(s):  
Tijl Grootswagers ◽  
Amanda K Robinson ◽  
Sophia M Shatek ◽  
Thomas A Carlson

The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/.
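Time-resolved multivariate decoding of the kind described above can be illustrated with a minimal nearest-class-mean classifier applied independently at each timepoint. This is a sketch only: real EEG/MEG pipelines typically use cross-validated regularized classifiers, and the dimensions and the simulated mid-epoch signal here are toy assumptions:

```python
import numpy as np

def decode_timecourse(X_train, y_train, X_test, y_test):
    """Nearest-class-mean decoding at each timepoint.

    X_*: (n_trials, n_channels, n_times) arrays; y_*: class label per trial.
    Returns decoding accuracy as a function of time.
    """
    classes = np.unique(y_train)
    acc = np.zeros(X_train.shape[-1])
    for t in range(X_train.shape[-1]):
        means = np.stack([X_train[y_train == c, :, t].mean(0) for c in classes])
        d = ((X_test[:, None, :, t] - means[None]) ** 2).sum(-1)
        acc[t] = np.mean(classes[d.argmin(1)] == y_test)
    return acc

# Toy data: a class-specific signal that emerges halfway through the epoch.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 8, 20
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_channels, n_times))
X[y == 1, :, 10:] += 1.0                     # stimulus information appears at t=10
acc = decode_timecourse(X[::2], y[::2], X[1::2], y[1::2])
```

The resulting accuracy timecourse is flat at chance before the signal onset and rises after it, which is the basic readout used to ask when (and how strongly) attended versus unattended stimulus information is present.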


2008 ◽  
Vol 20 (9) ◽  
pp. 2325-2360 ◽  
Author(s):  
Rama Natarajan ◽  
Quentin J. M. Huys ◽  
Peter Dayan ◽  
Richard S. Zemel

Naturally occurring sensory stimuli are dynamic. In this letter, we consider how spiking neural populations might transmit information about continuous dynamic stimulus variables. The combination of simple encoders and temporal stimulus correlations leads to a code in which information is not readily available to downstream neurons. Here, we explore a complex encoder that is paired with a simple decoder that allows representation and manipulation of the dynamic information in neural systems. The encoder we present takes the form of a biologically plausible recurrent spiking neural network where the output population recodes its inputs to produce spikes that are independently decodeable. We show that this network can be learned in a supervised manner by a simple local learning rule.
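The simple-decoder side of the scheme described here can be sketched as a linear readout of exponentially filtered spike trains. Everything below is an illustrative assumption (Bernoulli spiking, a least-squares readout, toy parameter values), not the letter's recurrent encoder; it shows only the kind of decoder the encoder is meant to serve:

```python
import numpy as np

def exp_filter(spikes, tau=5.0):
    """Causal exponential filtering of spike trains, shape (n_neurons, T)."""
    out = np.zeros_like(spikes, dtype=float)
    decay = np.exp(-1.0 / tau)
    for t in range(1, spikes.shape[1]):
        out[:, t] = out[:, t - 1] * decay + spikes[:, t]
    return out

# Toy population: spiking probability modulated by a dynamic stimulus.
rng = np.random.default_rng(0)
T = 500
stim = np.sin(np.linspace(0, 8 * np.pi, T))                # dynamic stimulus
gains = rng.normal(size=10)                                # per-neuron tuning
rates = np.clip(0.2 + 0.15 * np.outer(gains, stim), 0, 1)
spikes = (rng.random((10, T)) < rates).astype(float)

# Linear decoder: least-squares weights on filtered spike trains.
r = exp_filter(spikes)
w, *_ = np.linalg.lstsq(r.T, stim, rcond=None)
est = r.T @ w
```

With temporally correlated stimuli and richer encoders, the point of the letter is that the recoded spikes remain decodeable by exactly this kind of fixed linear readout.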


2021 ◽  
Author(s):  
Lydia Barnes ◽  
Erin Goddard ◽  
Alexandra Woolgar

Every day, we respond to the dynamic world around us by flexibly choosing actions to meet our goals. This constant problem solving, in familiar settings and in novel tasks, is a defining feature of human behaviour. Flexible neural populations are thought to support this process by adapting to prioritise task-relevant information, driving coding in specialised brain regions toward stimuli and actions that are important for our goal. Accordingly, human fMRI shows that activity patterns in frontoparietal cortex contain more information about visual features when they are task-relevant. However, if this preferential coding drives momentary focus, for example to solve each part of a task, it must reconfigure more quickly than we can observe with fMRI. Here we used MVPA with MEG to test for rapid reconfiguration of stimulus information when a new feature becomes relevant within a trial. Participants saw two displays on each trial. They attended to the shape of a first target then the colour of a second, or vice versa, and reported the attended features at a choice display. We found evidence of preferential coding for the relevant features in both trial phases, even as participants shifted attention mid-trial, commensurate with fast sub-trial reconfiguration. However, we only found this pattern of results when the task was difficult, and the stimulus displays contained multiple objects, and not in a simpler task with the same structure. The data suggest that adaptive coding in humans can operate on a fast, sub-trial timescale, suitable for supporting periods of momentary focus when complex tasks are broken down into simpler ones, but may not always do so.


2017 ◽  
Vol 114 (22) ◽  
pp. 5725-5730 ◽  
Author(s):  
Victor Minces ◽  
Lucas Pinto ◽  
Yang Dan ◽  
Andrea A. Chiba

A primary function of the brain is to form representations of the sensory world. Its capacity to do so depends on the relationship between signal correlations, associated with neuronal receptive fields, and noise correlations, associated with neuronal response variability. It was recently shown that the behavioral relevance of sensory stimuli can modify the relationship between signal and noise correlations, presumably increasing the encoding capacity of the brain. In this work, we use data from the visual cortex of the awake mouse watching naturalistic stimuli and show that a similar modification is observed under heightened cholinergic modulation. Increasing cholinergic levels in the cortex through optogenetic stimulation of basal forebrain cholinergic neurons decreases the dependency that is commonly observed between signal and noise correlations. Simulations of correlated neural networks with realistic firing statistics indicate that this change in the correlation structure increases the encoding capacity of the network.
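Signal and noise correlations as used above can be computed directly from trial-by-stimulus response arrays: signal correlations from trial-averaged tuning curves, noise correlations from the single-trial residuals. A numpy sketch (the shared-noise toy data and all parameter values are illustrative assumptions):

```python
import numpy as np

def signal_noise_correlations(responses):
    """responses: (n_trials, n_stimuli, n_neurons) array of spike counts.

    Signal correlations: correlations between trial-averaged tuning curves.
    Noise correlations: correlations between trial-to-trial residuals.
    """
    tuning = responses.mean(axis=0)                       # (n_stimuli, n_neurons)
    sig = np.corrcoef(tuning.T)
    resid = (responses - tuning).reshape(-1, responses.shape[-1])
    noise = np.corrcoef(resid.T)
    return sig, noise

# Toy pair of neurons: similar tuning plus a shared noise source.
rng = np.random.default_rng(0)
stim = np.linspace(0, 2 * np.pi, 12)
true_tuning = np.stack([np.cos(stim), np.cos(stim - 0.3)], axis=1)
shared = rng.normal(size=(100, 12, 1))                    # common noise source
responses = true_tuning + 0.7 * shared + 0.3 * rng.normal(size=(100, 12, 2))
sig, noise = signal_noise_correlations(responses)
```

In this toy case both quantities are high for the pair; the dependency the paper describes is the population-level tendency for similarly tuned neurons (high signal correlation) to also share noise, which is what cholinergic modulation is reported to weaken.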


2020 ◽  
Vol 30 (10) ◽  
pp. 5583-5596 ◽  
Author(s):  
Yang Zhou ◽  
Yining Liu ◽  
Mingsha Zhang

Abstract
Efficiently mapping sensory stimuli onto motor programs is crucial for rapidly choosing appropriate behavioral responses. While neuronal mechanisms underlying simple, one-to-one sensorimotor mapping have been extensively studied, how the brain achieves complex, many-to-one sensorimotor mapping remains unclear. Here, we recorded single-neuron activity from the lateral intraparietal (LIP) cortex of monkeys trained to map multiple spatial positions of a visual cue onto two opposite saccades. We found that LIP neurons' activity was consistent with directly mapping multiple cue positions to the associated saccadic direction (SDir), regardless of whether the visual cue appeared in or outside neurons' receptive fields. Unlike the explicit encoding of visual categories, such cue–target mapping (CTM)–related activity covaried with the associated SDirs. Furthermore, the CTM was preferentially mediated by visual neurons identified by memory-guided saccade. These results indicate that LIP plays a crucial role in the early stage of many-to-one sensorimotor transformation.


Author(s):  
Caroline A. Miller ◽  
Laura L. Bruce

The first visual cortical axons arrive in the cat superior colliculus by the time of birth. Adultlike receptive fields develop slowly over several weeks following birth. The developing cortical axons go through a sequence of changes before acquiring their adultlike morphology and function. To determine how these axons interact with neurons in the colliculus, cortico-collicular axons were labeled with biocytin (an anterograde neuronal tracer) and studied with electron microscopy.

Deeply anesthetized animals received 200-500 nl injections of biocytin (Sigma; 5% in phosphate buffer) in the lateral suprasylvian visual cortical area. After a 24 hr survival time, the animals were deeply anesthetized and perfused with 0.9% phosphate-buffered saline, followed by fixation with a solution of 1.25% glutaraldehyde and 1.0% paraformaldehyde in 0.1 M phosphate buffer. The brain was sectioned transversely on a vibratome at 50 μm. The tissue was processed immediately to visualize the biocytin.

