Learning recurrent dynamics in spiking networks

eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Christopher M. Kim ◽  
Carson C. Chow

Spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for the synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, stabilize irregular spiking activity in a network of excitatory and inhibitory neurons respecting Dale’s law, and reproduce the heterogeneous spiking rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.
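The recursive least squares (RLS) update at the heart of this training method can be sketched in a few lines. This is a generic RLS rule on a linear readout with a synthetic teacher signal, not the authors' full spiking-network procedure; the dimensions and the teacher weights `w_true` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 50, 800                     # illustrative: input dimension, training steps
w_true = rng.standard_normal(N)    # synthetic teacher weights (assumption)

P = np.eye(N)                      # running inverse of the input correlation matrix
w = np.zeros(N)                    # weight vector being trained

for t in range(T):
    r = rng.standard_normal(N)     # stand-in for filtered synaptic activities
    f = w_true @ r                 # target signal at this step
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)        # RLS gain vector
    P -= np.outer(k, Pr)           # rank-1 update of the inverse correlation
    w -= (w @ r - f) * k           # error-driven weight update
```

Each update costs O(N^2) rather than the O(N^3) of re-solving the least squares problem from scratch, which is why RLS-style rules are practical for training recurrent connectivity online.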


2007 ◽  
Vol 97 (6) ◽  
pp. 3859-3867 ◽  
Author(s):  
Hiroshi Okamoto ◽  
Yoshikazu Isomura ◽  
Masahiko Takada ◽  
Tomoki Fukai

Temporal integration of externally or internally driven information is required for a variety of cognitive processes. This computation is generally linked with graded rate changes in cortical neurons, which typically appear during the delay period of a cognitive task in the prefrontal and other cortical areas. Here, we present a neural network model to produce graded (climbing or descending) neuronal activity. Model neurons are interconnected randomly by AMPA-receptor–mediated fast excitatory synapses and are subject to noisy background excitatory and inhibitory synaptic inputs. In each neuron, a prolonged afterdepolarizing potential follows every spike generation. Then, driven by an external input, the individual neurons display bimodal rate changes between a baseline state and an elevated firing state, with the latter being sustained by regenerated afterdepolarizing potentials. We show that, when the variance of the background input and the uniform weight of recurrent synapses are adequately tuned, stochastic noise and reverberating synaptic input organize these bimodal changes into a sequence that exhibits graded population activity with a nearly constant slope. To test the validity of the proposed mechanism, we analyzed the graded activity of anterior cingulate cortex neurons in monkeys performing delayed conditional Go/No-go discrimination tasks. The delay-period activities of cingulate neurons exhibited bimodal activity patterns and trial-to-trial variability that are similar to those predicted by the proposed model.


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1701
Author(s):  
Theodor Panagiotakopoulos ◽  
Sotiris Kotsiantis ◽  
Georgios Kostopoulos ◽  
Omiros Iatrellis ◽  
Achilles Kameas

Over recent years, massive open online courses (MOOCs) have gained increasing popularity in the field of online education. Students with different needs and learning specificities are able to attend a wide range of specialized online courses offered by universities and educational institutions. As a result, large amounts of data regarding students’ demographic characteristics, activity patterns, and learning performances are generated and stored in institutional repositories on a daily basis. Unfortunately, a key issue in MOOCs is low completion rates, which directly affect student success. Therefore, it is of utmost importance for educational institutions and faculty members to find more effective practices and reduce non-completer ratios. In this context, the main purpose of the present study is to employ a plethora of state-of-the-art supervised machine learning algorithms for predicting student dropout in a MOOC for smart city professionals at an early stage. The experimental results show that accuracy exceeds 96% based on data collected during the first week of the course, thus enabling effective intervention strategies and support actions.
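As a toy illustration of the early-dropout-prediction setup, the sketch below fits a plain logistic regression (one stand-in for the suite of supervised learners the study benchmarks) on synthetic "week-one" activity features. The feature names, weights, and data are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic standardized week-1 features per student:
# logins, video minutes, quiz score. Negative weights encode the toy
# assumption "more activity, lower dropout odds".
n = 400
X = rng.standard_normal((n, 3))
w_true = np.array([-1.5, -1.0, -2.0])        # illustrative assumption
p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
y = (rng.random(n) < p).astype(float)        # 1 = dropped out

# Logistic regression fit by batch gradient descent.
w = np.zeros(3)
for _ in range(500):
    yhat = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (yhat - y) / n          # gradient of the log loss

pred = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = float(np.mean(pred == (y == 1)))
```

The 96% figure in the abstract comes from richer features and stronger models; this sketch only demonstrates the shape of the pipeline: week-one features in, a dropout probability per student out.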


2021 ◽  
Vol 12 (6) ◽  
pp. 1-23
Author(s):  
Shuo Tao ◽  
Jingang Jiang ◽  
Defu Lian ◽  
Kai Zheng ◽  
Enhong Chen

Mobility prediction plays an important role in a wide range of location-based applications and services. However, there are three problems in the existing literature: (1) explicit high-order interactions of spatio-temporal features are not systematically modeled; (2) most existing algorithms place attention mechanisms on top of recurrent networks, so they do not allow full parallelism and are inferior to self-attention for capturing long-range dependencies; (3) most existing work neither makes good use of long-term historical information nor effectively models the long-term periodicity of users. To this end, we propose MoveNet and RLMoveNet. MoveNet is a self-attention-based sequential model that predicts each user’s next destination from her most recent visits and historical trajectory. MoveNet first introduces a cross-based learning framework for modeling feature interactions. With self-attention over both the most recent visits and the historical trajectory, MoveNet can capture the user’s long-term regularity more efficiently. To model long-term periodicity more effectively, we then add a reinforcement learning layer on top of MoveNet, yielding RLMoveNet. RLMoveNet regards human mobility prediction as a reinforcement learning problem, using the reinforcement learning layer as a regularizer that drives the model to attend to behaviors with periodic structure, which improves its effectiveness. We evaluate both models on three real-world mobility datasets. MoveNet outperforms the state-of-the-art mobility predictor by around 10% in terms of accuracy, while achieving faster convergence and over 4x training speedup. Moreover, RLMoveNet achieves higher prediction accuracy than MoveNet, showing that explicitly modeling periodicity from a reinforcement learning perspective is more effective.
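The self-attention operation over visit sequences can be sketched generically. This is textbook scaled dot-product attention, not MoveNet's actual architecture; the sequence length and embedding width are illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (T, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # each row is a softmax distribution
    return w @ V, w

rng = np.random.default_rng(3)
T, d = 6, 4                                  # illustrative: visits, embedding size
X = rng.standard_normal((T, d))              # stand-in for embedded visits
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

Because every position attends to every other in a single matrix product, the whole sequence is processed in parallel; this is the parallelism advantage the abstract contrasts with attention layered on top of a recurrent network.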


2002 ◽  
Vol 205 (17) ◽  
pp. 2591-2603 ◽  
Author(s):  
Eric D. Tytell ◽  
George V. Lauder

SUMMARY
The fast-start escape response is the primary reflexive escape mechanism in a wide phylogenetic range of fishes. To add detail to previously reported novel muscle activity patterns during the escape response of the bichir, Polypterus, we analyzed escape kinematics and muscle activity patterns in Polypterus senegalus using high-speed video and electromyography (EMG). Five fish were filmed at 250 Hz while synchronously recording white muscle activity at five sites on both sides of the body simultaneously (10 sites in total). Body wave speed and center of mass velocity, acceleration and curvature were calculated from digitized outlines. Six EMG variables per channel were also measured to characterize the motor pattern. P. senegalus shows a wide range of activity patterns, from very strong responses, in which the head often touched the tail, to very weak responses. This variation in strength is significantly correlated with the stimulus and is mechanically driven by changes in stage 1 muscle activity duration. Besides these changes in duration, the stage 1 muscle activity is unusual because it has strong bilateral activity, although the observed contralateral activity is significantly weaker and shorter in duration than ipsilateral activity. Bilateral activity may stiffen the body, but it does so by a constant amount over the variation we observed; therefore, P. senegalus does not modulate fast-start wave speed by changing body stiffness. Escape responses almost always have stage 2 contralateral muscle activity, often only in the anterior third of the body. The magnitude of the stage 2 activity is the primary predictor of final escape velocity.


2021 ◽  
pp. 1-28
Author(s):  
ANURAJ SINGH ◽  
PREETI DEOLIA

In this paper, we study a discrete-time predator–prey model with a Holling type-III functional response and harvesting in both species. A detailed bifurcation analysis with respect to a model parameter reveals a rich bifurcation structure, including transcritical, flip and Neimark–Sacker bifurcations. Moreover, sufficient conditions are given that guarantee the global asymptotic stability of the trivial fixed point and of the unique positive fixed point. The existence of chaos in the sense of Li–Yorke is established for the discrete system. Extensive numerical simulations support the analytical findings: the system exhibits flip and Neimark–Sacker bifurcations followed by a wide parameter range of dense chaos. Further, the chaos occurring in the system can be controlled by choosing a suitable value of the prey harvesting rate.
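A discrete-time map of this family can be iterated directly. The sketch below is a generic predator–prey map with a Holling type-III response and proportional harvesting in both species; it does not reproduce the paper's exact equations or parameter values, all of which are illustrative here.

```python
import numpy as np

def step(x, y, r=1.2, a=1.0, b=1.0, c=0.8, d=0.3, h1=0.05, h2=0.02):
    """One iteration: logistic prey growth, Holling type-III predation
    with response x^2 / (b + x^2), and proportional harvesting h1, h2
    in prey and predator (illustrative parameter choices)."""
    f = x * x / (b + x * x)                       # type-III functional response
    x_next = x * (1.0 + r * (1.0 - x)) - a * f * y - h1 * x
    y_next = y * (1.0 - d) + c * a * f * y - h2 * y
    return max(x_next, 0.0), max(y_next, 0.0)     # clip at extinction

x, y = 0.5, 0.2
traj = []
for _ in range(500):
    x, y = step(x, y)
    traj.append((x, y))
traj = np.array(traj)
```

Sweeping one parameter (e.g. the prey harvesting rate `h1`) while recording the tail of `traj` produces the standard bifurcation diagram from which flip and Neimark–Sacker transitions, and the onset of chaos, can be read off numerically.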


1995 ◽  
Vol 73 (5) ◽  
pp. 1876-1891 ◽  
Author(s):  
M. B. Calford ◽  
M. N. Semple

1. Several studies of auditory cortex have examined the competitive inhibition that can occur when appropriate sounds are presented to each ear. However, most cortical neurons also show both excitation and inhibition in response to presentation of stimuli at one ear alone. The extent of such inhibition has not been described. Forward masking, in which a variable masking stimulus was followed by a fixed probe stimulus (within the excitatory response area), was used to examine the extent of monaural inhibition for neurons in primary auditory cortex of anesthetized cats (barbiturate or barbiturate-ketamine). Both the masking and probe stimuli were 50-ms tone pips presented to the contralateral ear. Most cortical neurons showed significant forward masking at delays beyond which masking effects in the auditory nerve are relatively small compared with those seen in cortical neurons. Analysis was primarily concerned with such components. Standard rate-level functions were also obtained and were examined for nonmonotonicity, an indication of level-dependent monaural inhibition. 2. Consistent with previous reports, a wide range of frequency tuning properties (excitatory response area shapes) was found in cortical neurons. This was matched by a wide range of forward-masking-derived inhibitory response areas. At the most basic level of analysis, these were classified according to the presence of lateral inhibition, i.e., where a probe tone at a neuron's characteristic frequency was masked by tones outside the limits of the excitatory response area. Lateral inhibition was a property of 38% of the sampled neurons. Such neurons represented 77% of those with nonmonotonic rate-level functions, indicating a strong correlation between the two indexes of monaural inhibition; however, the shapes of forward masking inhibitory response areas did not usually correspond with those required to account for the "tuning" of a neuron. 
In addition, it was found that level-dependent inhibition was not added to by forward masking inhibition. 3. Analysis of the discharges to individual stimulus pair presentations, under conditions of partial masking, revealed that discharges to the probe occurred independently of discharges to the preceding masker. This indicates that even when the masker is within a neuron's excitatory response area, forward masking is not a postdischarge habituation phenomenon. However, for most neurons the degree of masking summed over multiple stimulus presentations appears determined by the same stimulus parameters that determine the probability of response to the masker.(ABSTRACT TRUNCATED AT 400 WORDS)


1991 ◽  
Vol 66 (4) ◽  
pp. 1156-1165 ◽  
Author(s):  
V. L. Smith-Swintosky ◽  
C. R. Plata-Salaman ◽  
T. R. Scott

1. Extracellular action potentials were recorded from 50 single neurons in the insular-opercular cortex of two alert cynomolgus monkeys during gustatory stimulation of the tongue and palate. 2. Sixteen stimuli, including salts, sugars, acids, alkaloids, monosodium glutamate, and aspartame, were chosen to represent a wide range of taste qualities. Concentrations were selected to elicit a moderate gustatory response, as determined by reference to previous electrophysiological data or to the human psychophysical literature. 3. The cortical region over which taste-evoked activity could be recorded included the frontal operculum and anterior insula, an area of approximately 75 mm3. Taste-responsive cells constituted 50 (2.7%) of the 1,863 neurons tested. Nongustatory cells responded to mouth movement (20.7%), somatosensory stimulation of the tongue (9.6%), stimulus approach or anticipation (1.7%), and tongue extension (0.6%). The sensitivities of 64.6% of these cortical neurons could not be identified by our stimulation techniques. 4. Taste cells had low spontaneous activity levels (3.7 +/- 3.0 spikes/s, mean +/- SD) and showed little inhibition. They were moderately broadly tuned, with a mean entropy coefficient of 0.76 +/- 0.17. Excitatory responses were typically not robust. 5. Hierarchical cluster analysis was used to determine whether neurons could be divided into discrete types, as defined by their response profiles to the entire stimulus array. There was an apparent division of response profiles into four general categories, with primary sensitivities to sodium (n = 18), glucose (n = 15), quinine (n = 12), and acid (n = 5). However, these categories were not statistically independent. Therefore the notion of functionally distinct neuron types was not supported by an analysis of the distribution of response profiles. It was the case, however, that neurons in the sodium category could be distinguished from other neurons by their relative specificity. 6. 
The similarity among the taste qualities represented by this stimulus array was assessed by calculating correlations between the activity profiles they elicited from these 50 neurons. The results generally confirmed expectations derived from human psychophysical studies. In a multidimensional representation of stimulus similarity, there were groups that contained acids, sodium salts, and chemicals that humans label bitter and sweet. 7. The small proportion of insular-opercular neurons that are taste sensitive and the low discharge rates that taste stimuli are able to evoke from them suggest a wider role for this cortical area than just gustatory coding.(ABSTRACT TRUNCATED AT 400 WORDS)


2017 ◽  
Vol 24 (3) ◽  
pp. 277-293 ◽  
Author(s):  
Selen Atasoy ◽  
Gustavo Deco ◽  
Morten L. Kringelbach ◽  
Joel Pearson

A fundamental characteristic of spontaneous brain activity is coherent oscillations covering a wide range of frequencies. Interestingly, these temporal oscillations are highly correlated among spatially distributed cortical areas forming structured correlation patterns known as the resting state networks, although the brain is never truly at “rest.” Here, we introduce the concept of harmonic brain modes—fundamental building blocks of complex spatiotemporal patterns of neural activity. We define these elementary harmonic brain modes as harmonic modes of structural connectivity; that is, connectome harmonics, yielding fully synchronous neural activity patterns with different frequency oscillations emerging on and constrained by the particular structure of the brain. Hence, this particular definition implicitly links the hitherto poorly understood dimensions of space and time in brain dynamics and its underlying anatomy. Further, we show how harmonic brain modes can explain the relationship between neurophysiological, temporal, and network-level changes in the brain across different mental states (wakefulness, sleep, anesthesia, psychedelic). Notably, when decoded as activation of connectome harmonics, spatial and temporal characteristics of neural activity naturally emerge from the interplay between excitation and inhibition and this critical relation fits the spatial, temporal, and neurophysiological changes associated with different mental states. Thus, the introduced framework of harmonic brain modes not only establishes a relation between the spatial structure of correlation patterns and temporal oscillations (linking space and time in brain dynamics), but also enables a new dimension of tools for understanding fundamental principles underlying brain dynamics in different states of consciousness.
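Connectome harmonics, as defined above, are eigenmodes of the graph Laplacian of structural connectivity. The sketch below computes them for a toy ring "connectome" rather than empirical human connectome data; the graph and its size are illustrative assumptions.

```python
import numpy as np

# Toy structural connectivity: a ring of 20 regions (illustrative only).
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

L = np.diag(A.sum(axis=1)) - A         # graph Laplacian of the connectivity
eigvals, eigvecs = np.linalg.eigh(L)   # eigvecs[:, k] is the k-th harmonic

# Small eigenvalues correspond to coarse, slowly varying spatial modes;
# large eigenvalues to fine-grained, rapidly varying modes. Observed
# activity can then be decoded as a weighted sum of these harmonics.
```

For this ring graph the spectrum is known in closed form (eigenvalues 2 − 2cos(2πk/n)), which makes it a convenient sanity check before applying the same decomposition to a measured connectome.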


2004 ◽  
Vol 92 (2) ◽  
pp. 959-976 ◽  
Author(s):  
Renaud Jolivet ◽  
Timothy J. Lewis ◽  
Wulfram Gerstner

We demonstrate that single-variable integrate-and-fire models can quantitatively capture the dynamics of a physiologically detailed model for fast-spiking cortical neurons. Through a systematic set of approximations, we reduce the conductance-based model to two variants of integrate-and-fire models. In the first variant (nonlinear integrate-and-fire model), parameters depend on the instantaneous membrane potential, whereas in the second variant, they depend on the time elapsed since the last spike [Spike Response Model (SRM)]. The direct reduction links features of the simple models to biophysical features of the full conductance-based model. To quantitatively test the predictive power of the SRM and of the nonlinear integrate-and-fire model, we compare spike trains in the simple models to those in the full conductance-based model when the models are subjected to identical randomly fluctuating input. For random current input, the simple models reproduce 70–80 percent of the spikes in the full model (with temporal precision of ±2 ms) over a wide range of firing frequencies. For random conductance injection, up to 73 percent of spikes are coincident. We also present a technique for numerically optimizing parameters in the SRM and the nonlinear integrate-and-fire model based on spike trains in the full conductance-based model. This technique can be used to tune simple models to reproduce spike trains of real neurons.
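A minimal leaky integrate-and-fire neuron under randomly fluctuating current input, in the spirit of the reduced single-variable models compared here, can be simulated as below. The parameter values are illustrative, not the paper's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Leaky integrate-and-fire neuron, forward-Euler integration.
dt, T = 0.1, 1000.0                           # time step, duration (ms)
tau_m, R = 10.0, 1.0                          # membrane time constant (ms), resistance
v_rest, v_th, v_reset = -65.0, -50.0, -65.0   # rest, threshold, reset (mV)

v = v_rest
spikes = []
for i in range(int(T / dt)):
    I = 20.0 + 5.0 * rng.standard_normal()    # randomly fluctuating input current
    v += dt / tau_m * (-(v - v_rest) + R * I) # leaky integration
    if v >= v_th:                             # threshold crossing -> spike
        spikes.append(i * dt)
        v = v_reset

rate = len(spikes) / (T / 1000.0)             # mean firing rate in Hz
```

Comparing the spike times of such a reduced model against a detailed conductance-based simulation under the *same* frozen input realization, with a coincidence window of a few milliseconds, is the kind of evaluation the abstract describes.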

