Flexibility in motor timing constrains the topology and dynamics of pattern generator circuits

2015 ◽  
Author(s):  
Cengiz Pehlevan ◽  
Farhan Ali ◽  
Bence P. Ölveczky

Summary: Temporally precise movement patterns underlie many motor skills and innate actions, yet the flexibility with which the timing of such stereotyped behaviors can be modified is poorly understood. To probe this, we induced adaptive changes to the temporal structure of birdsong. We find that the duration of specific song segments can be modified without affecting the timing in other parts of the song. We derive formal prescriptions for how neural networks can implement such flexible motor timing. We find that randomly connected recurrent networks, a common approximation for how neocortex is wired, do not generally conform to these prescriptions, though certain implementations can approximate them. We show that feedforward networks, by virtue of their one-to-one mapping between network activity and time, are better suited. Our study provides general prescriptions for pattern generator networks that implement flexible motor timing, an important aspect of many motor skills, including birdsong and human speech.
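
To make the feedforward prescription concrete, here is a minimal sketch (not from the paper) of a chain-like pattern generator in which each link contributes a propagation delay, so network position maps one-to-one onto time; stretching the delays inside one segment rescales that segment's duration without touching the others. All numbers and segment boundaries are illustrative.

```python
import numpy as np

# A feedforward chain: node k fires delay[k] ms after node k-1.
# Segment durations are sums of link delays, so stretching the delays
# inside one segment rescales that segment alone.

delays = np.full(12, 10.0)            # ms per link (illustrative)
segments = [(0, 4), (4, 8), (8, 12)]  # three song "syllables"

def segment_durations(delays, segments):
    return [delays[a:b].sum() for a, b in segments]

print(segment_durations(delays, segments))     # [40.0, 40.0, 40.0]

# Adaptively stretch only the middle segment by 25%.
stretched = delays.copy()
stretched[4:8] *= 1.25
print(segment_durations(stretched, segments))  # [40.0, 50.0, 40.0]
```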

2020 ◽  
Vol 32 (1) ◽  
pp. 1-35 ◽  
Author(s):  
Shagun Sodhani ◽  
Sarath Chandar ◽  
Yoshua Bengio

Catastrophic forgetting and capacity saturation are the central challenges of any parametric lifelong learning system. In this work, we study these challenges in the context of sequential supervised learning with an emphasis on recurrent neural networks. To evaluate models in the lifelong learning setting, we propose a simple and intuitive curriculum-based benchmark in which models are trained on tasks of increasing difficulty. To measure the impact of catastrophic forgetting, the model is tested on all previous tasks as it completes each new task. As a step toward developing true lifelong learning systems, we unify gradient episodic memory (an approach that alleviates catastrophic forgetting) and Net2Net (a capacity expansion approach). Both methods were proposed in the context of feedforward networks, and we evaluate the feasibility of using them with recurrent networks. Evaluation on the proposed benchmark shows that the unified model is better suited to the lifelong learning setting than either constituent model.
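
For readers unfamiliar with the gradient-episodic-memory side of the unified model, a minimal sketch of the underlying gradient-projection idea follows. It implements the single-constraint variant (in the spirit of A-GEM) rather than GEM's full quadratic program; all values and names are illustrative.

```python
import numpy as np

def project_gradient(g, g_mem):
    """Project the current-task gradient g so it does not increase loss
    on the episodic memory (single-constraint, A-GEM-style rule).
    If g already agrees with the memory gradient, it is left alone."""
    dot = g @ g_mem
    if dot >= 0:
        return g
    # Remove the conflicting component along g_mem.
    return g - (dot / (g_mem @ g_mem)) * g_mem

g = np.array([1.0, -2.0])      # gradient on the new task
g_mem = np.array([1.0, 1.0])   # gradient on stored old-task examples
print(project_gradient(g, g_mem))  # conflict removed: [1.5, -1.5]
```

After projection, the returned gradient has a non-negative inner product with the memory gradient, so a small step along it does not (to first order) hurt performance on the remembered tasks.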


2008 ◽  
Vol 17 (05) ◽  
pp. 963-984 ◽  
Author(s):  
CHUN-CHENG PENG ◽  
GEORGE D. MAGOULAS

Recurrent networks constitute an elegant way of increasing the capacity of feedforward networks to deal with complex data in the form of sequences of vectors. They are well known for their power to model temporal dependencies and process sequences for classification, recognition, and transduction. In this paper we propose an advanced nonmonotone Conjugate Gradient training algorithm for recurrent neural networks, which is equipped with an adaptive tuning strategy for both the nonmonotone learning horizon and the stepsize. Simulation results in sequence processing using three different recurrent architectures demonstrate that this modification of the Conjugate Gradient method is more effective than previous attempts.
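
A minimal sketch of a nonmonotone acceptance rule (in the Grippo-Lampariello-Lucidi style) is given below; the paper's adaptive tuning of the learning horizon and stepsize is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def nonmonotone_search(f, grad, x, d, history, M=5, c=1e-4, tau=0.5):
    """Backtracking line search that accepts a step if it improves on
    the *maximum* of the last M objective values (nonmonotone rule),
    rather than on f(x) alone, allowing occasional uphill steps."""
    f_ref = max(history[-M:])          # nonmonotone reference value
    slope = grad(x) @ d                # directional derivative (< 0)
    alpha = 1.0
    while f(x + alpha * d) > f_ref + c * alpha * slope:
        alpha *= tau                   # backtrack
    return alpha

# Toy quadratic example with steepest-descent directions.
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([3.0, -4.0])
history = [f(x)]
for _ in range(5):
    d = -grad(x)
    alpha = nonmonotone_search(f, grad, x, d, history)
    x = x + alpha * d
    history.append(f(x))
print(history)  # decreases here, but the rule permits transient increases
```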


2021 ◽  
Vol 3 (4) ◽  
pp. 316-323 ◽  
Author(s):  
Jason Z. Kim ◽  
Zhixin Lu ◽  
Erfan Nozari ◽  
George J. Pappas ◽  
Danielle S. Bassett

1994 ◽  
Vol 71 (1) ◽  
pp. 294-308 ◽  
Author(s):  
I. Ziv ◽  
D. A. Baxter ◽  
J. H. Byrne

1. We describe a simulator for neural networks and action potentials (SNNAP) that can simulate up to 30 neurons, each with up to 30 voltage-dependent conductances, 30 electrical synapses, and 30 multicomponent chemical synapses. Voltage-dependent conductances are described by Hodgkin-Huxley type equations, and the contributions of time-dependent synaptic conductances are described by second-order differential equations. The program also incorporates equations for simulating different types of neural modulation and synaptic plasticity.

2. Parameters, initial conditions, and output options for SNNAP are passed to the program through a number of modular ASCII files. These modules can be modified by commonly available text editors that use a conventional (i.e., character-based) interface or by an editor incorporated into SNNAP that uses a graphical interface. The modular design facilitates the incorporation of existing modules into new simulations; thus, libraries of files describing distinctive cell types and distinctive neural networks can be developed.

3. Several different types of neurons with distinct biophysical and firing properties were simulated by incorporating different combinations of voltage-dependent Na+, Ca2+, and K+ channels as well as Ca2+-activated and Ca2+-inactivated channels. Simulated cells included those that respond to depolarization with tonic firing, adaptive firing, or plateau potentials, as well as endogenous pacemaker and bursting cells.

4. Several types of simple neural networks were simulated, including feedforward excitatory and inhibitory chemical synaptic connections, a network of electrically coupled cells, and a network with feedback chemical synaptic connections that produced rhythmic neural activity. In addition, using the equations describing electrical coupling, current flow in a branched neuron with 18 compartments was simulated.

5. Enhancement of excitability and enhancement of transmitter release, produced by modulatory transmitters, were simulated by second-messenger-induced modulation of K+ currents. A depletion model for synaptic depression was also simulated.

6. We also attempted to simulate the features of a more complicated central pattern generator, inspired by the properties of neurons in the buccal ganglia of Aplysia. Dynamic changes in the activity of this central pattern generator were produced by second-messenger-induced modulation of a slow inward current in one of the neurons.
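
As an illustration of the second-order synaptic dynamics mentioned in point 1, the sketch below integrates the classic alpha-function conductance, written as two coupled first-order equations and stepped with forward Euler; the time constant, peak conductance, and spike time are illustrative, not SNNAP defaults.

```python
import numpy as np

# Alpha-function synaptic conductance as a second-order system,
# split into two coupled first-order ODEs:
#   dz/dt = -z/tau + spike input
#   dg/dt = (-g + z)/tau
# A single presynaptic spike then yields g(t) ~ (t/tau) * exp(-t/tau).

dt, tau, g_max = 0.01, 5.0, 1.0     # ms, ms, arbitrary units
t = np.arange(0.0, 50.0, dt)
z = np.zeros_like(t)                # auxiliary variable
g = np.zeros_like(t)                # synaptic conductance

spike_idx = int(10.0 / dt)          # presynaptic spike at t = 10 ms
for i in range(1, len(t)):
    pulse = g_max / dt if i == spike_idx else 0.0
    z[i] = z[i-1] + dt * (-z[i-1] / tau + pulse)
    g[i] = g[i-1] + dt * (-g[i-1] + z[i-1]) / tau

# g peaks one time constant after the spike, as expected for an alpha synapse.
print(f"peak g = {g.max():.3f} at t = {t[g.argmax()]:.1f} ms")
```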


2004 ◽  
Vol 213 ◽  
pp. 483-486 ◽  
Author(s):  
David Brodrick ◽  
Douglas Taylor ◽  
Joachim Diederich

A recurrent neural network was trained to detect the time-frequency domain signature of narrowband radio signals against a background of astronomical noise. The objective was to investigate the use of recurrent networks for signal detection in the Search for Extra-Terrestrial Intelligence, though the problem is closely analogous to the detection of some classes of Radio Frequency Interference in radio astronomy.
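
A minimal sketch of such a recurrent scanner over spectrogram frames, assuming an Elman-style architecture with untrained random weights; the actual network, training procedure, and dimensions used in the study are not reproduced here.

```python
import numpy as np

# An Elman-style recurrent scanner: each time step consumes one column
# of the time-frequency image and updates a hidden state; a sigmoid
# readout of the final state scores "narrowband signal present".
# Weights are random and untrained; all shapes are illustrative.

rng = np.random.default_rng(0)
n_freq, n_hidden = 64, 32
W_in = rng.normal(scale=0.1, size=(n_hidden, n_freq))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
w_out = rng.normal(scale=0.1, size=n_hidden)

def detect(spectrogram):
    """spectrogram: array of shape (n_frames, n_freq)."""
    h = np.zeros(n_hidden)
    for frame in spectrogram:
        h = np.tanh(W_in @ frame + W_rec @ h)
    return 1.0 / (1.0 + np.exp(-(w_out @ h)))   # detection score in (0, 1)

noise = rng.normal(size=(100, n_freq))           # astronomical background
signal = noise.copy()
signal[:, 20] += 5.0                             # narrowband tone in bin 20
print(detect(noise), detect(signal))
```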


2003 ◽  
Vol 15 (8) ◽  
pp. 1897-1929 ◽  
Author(s):  
Barbara Hammer ◽  
Peter Tiňo

Recent experimental studies indicate that recurrent neural networks initialized with "small" weights are inherently biased toward definite memory machines (Tiňo, Čerňanský, & Beňušková, 2002a, 2002b). This article establishes a theoretical counterpart: the transition function of a recurrent network with small weights and a squashing activation function is a contraction. We prove that recurrent networks with a contractive transition function can be approximated arbitrarily well on input sequences of unbounded length by a definite memory machine. Conversely, every definite memory machine can be simulated by a recurrent network with a contractive transition function. Hence, initialization with small weights induces an architectural bias into learning with recurrent neural networks. This bias might have benefits from the point of view of statistical learning theory: it emphasizes one possible region of the weight space where generalization ability can be formally proved. It is well known that standard recurrent neural networks are not distribution independent learnable in the probably approximately correct (PAC) sense if arbitrary precision and inputs are considered. We prove that recurrent networks with a contractive transition function with a fixed contraction parameter fulfill the so-called distribution independent uniform convergence of empirical distances property and hence, unlike general recurrent networks, are distribution independent PAC learnable.
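
The contraction property is easy to verify numerically: with a 1-Lipschitz squashing function such as tanh and recurrent weights of spectral norm below 1, state differences decay geometrically under identical inputs, so only a bounded suffix of the input sequence matters, which is the definite-memory behaviour. A small sketch with illustrative sizes:

```python
import numpy as np

# For f(h, x) = tanh(W h + U x) with tanh 1-Lipschitz,
#   ||f(h1, x) - f(h2, x)|| <= ||W||_2 * ||h1 - h2||,
# so ||W||_2 < 1 makes the transition function a contraction.

rng = np.random.default_rng(1)
n = 20
W = rng.normal(size=(n, n))
W *= 0.5 / np.linalg.norm(W, 2)     # rescale to spectral norm 0.5 ("small")
U = rng.normal(size=(n, n))

def step(h, x):
    return np.tanh(W @ h + U @ x)

h1, h2 = rng.normal(size=n), rng.normal(size=n)  # two different states
for t in range(10):
    x = rng.normal(size=n)          # identical input fed to both states
    h1, h2 = step(h1, x), step(h2, x)
    print(t, np.linalg.norm(h1 - h2))  # shrinks at least 2x per step
```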


2019 ◽  
Author(s):  
Hedyeh Rezaei ◽  
Ad Aertsen ◽  
Arvind Kumar ◽  
Alireza Valizadeh

Abstract: Transient oscillations in the network activity upon sensory stimulation have been reported in different sensory areas. These evoked oscillations are the generic response of networks of excitatory and inhibitory neurons (EI-networks) to a transient external input. Recently, it has been shown that this resonance property of EI-networks can be exploited for communication in modular neuronal networks by enabling the transmission of sequences of synchronous spike volleys ('pulse packets'), despite the sparse and weak connectivity between the modules. The condition for successful transmission is that the pulse packet (PP) intervals match the period of the modules' resonance frequency. Hence, the mechanism was termed communication through resonance (CTR). This mechanism has three severe constraints, though. First, it needs periodic trains of PPs, whereas single PPs fail to propagate. Second, the inter-PP interval needs to match the network resonance. Third, transmission is very slow, because in each module the network resonance needs to build up over multiple oscillation cycles. Here, we show that, by adding appropriate feedback connections to the network, the CTR mechanism can be improved and the aforementioned constraints relaxed. Specifically, we show that adding feedback connections between two upstream modules, called the resonance pair, in an otherwise feedforward modular network can support successful propagation of a single PP throughout the entire network. The key condition for successful transmission is that the sum of the forward and backward delays in the resonance pair matches the period of the network modules' resonance frequency. The transmission is much faster, by more than a factor of two, than in the original CTR mechanism. Moreover, it distinctly lowers the threshold for successful communication by synchronous spiking in modular networks of weakly coupled networks. Thus, our results suggest a new functional role of bidirectional connectivity for communication in cortical area networks.

Author summary: The cortex is organized as a modular system, with the modules (cortical areas) communicating via weak long-range connections. It has been suggested that the intrinsic resonance properties of population activities in these areas might contribute to enabling successful communication. A module's intrinsic resonance appears in the damped oscillatory response to an incoming spike volley, enabling successful communication during the peaks of the oscillation. Such communication can be exploited in feedforward networks, provided the participating networks have similar resonance frequencies. This, however, is not necessarily true for cortical networks. Moreover, the communication is slow, as it takes several oscillation cycles to build up the response in the downstream network. Also, only periodic trains of spike volleys (and not single volleys) with matching intervals can propagate. Here, we present a novel mechanism that alleviates these shortcomings and enables propagation of synchronous spiking across weakly connected networks with not necessarily identical resonance frequencies. In this framework, an individual spike volley can propagate by local amplification through reverberation in a loop between two successive networks, connected by feedforward and feedback connections: the resonance pair. This overcomes the need for activity build-up in downstream networks, causing the volley to propagate distinctly faster and more reliably.
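
A toy illustration (not the authors' simulation) of the resonance-pair condition: if a module's pulse-packet response is modelled as a damped oscillation, re-injection of the packet after the loop delay d_fwd + d_back is amplifying only when that delay matches the module's resonance period. All numbers below are illustrative.

```python
import numpy as np

# A packet reverberating around the feedforward + feedback loop
# re-arrives after d_fwd + d_back ms; the re-arrival rides the damped
# oscillatory response cos(2*pi*t/T_res) * exp(-t/decay) of the module.

T_res, decay = 20.0, 30.0            # resonance period (ms), damping (ms)

def rearrival_gain(loop_delay):
    """Relative response when a packet re-arrives after loop_delay ms."""
    phase = 2 * np.pi * loop_delay / T_res
    return 1.0 + np.exp(-loop_delay / decay) * np.cos(phase)

for d_fwd, d_back in [(10.0, 10.0), (10.0, 5.0), (12.0, 4.0)]:
    d = d_fwd + d_back
    print(f"d_fwd + d_back = {d:4.1f} ms -> gain {rearrival_gain(d):.2f}")
# Gain is maximal when d_fwd + d_back equals T_res (20 ms here).
```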


2013 ◽  
Vol 25 (7) ◽  
pp. 1768-1806 ◽  
Author(s):  
N. Alex Cayco-Gajic ◽  
Eric Shea-Brown

Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarifying their origin and relationship in random, feedforward networks of McCulloch-Pitts neurons. Two key parameters are the thresholds at which neurons fire spikes and the overall level of feedforward connectivity. When neurons have low thresholds, we show that there is always a connectivity for which all of the properties in question occur, that is, these networks preserve overall firing rates from layer to layer and produce broad distributions of activity in each layer. This fails to occur, however, when neurons have high thresholds. A key tool in explaining this difference is the eigenstructure of the resulting mean-field Markov chain, as this reveals which activity modes will be preserved from layer to layer. We extend our analysis from purely excitatory networks to more complex models that include inhibition and local noise, and find that both of these features extend the parameter ranges over which networks produce the properties of interest.
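
A minimal sketch of the mean-field Markov chain in question, assuming N neurons per layer, independent Bernoulli(p) feedforward connectivity, and firing threshold theta; the eigenvalue spectrum indicates which activity modes persist across layers. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import binom

# Given k active units in layer l, each unit in layer l+1 receives
# Binomial(k, p) active inputs and fires iff that count reaches theta.
# The number active in layer l+1 is then Binomial(N, q_k), giving a
# Markov chain on the activity count whose eigenstructure shows which
# modes survive from layer to layer.

N, p, theta = 50, 0.2, 2

q = np.array([binom.sf(theta - 1, k, p) for k in range(N + 1)])  # P(fire | k)
T = np.array([binom.pmf(np.arange(N + 1), N, qk) for qk in q])   # transitions

eigvals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
print("leading eigenvalue moduli:", eigvals[:4])
# A subdominant eigenvalue near 1 signals a slowly decaying activity
# mode, i.e., firing rates approximately preserved across layers.
```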


Author(s):  
Anjali Daisy

Neural networks are models of the brain and nervous system. They are highly parallel and process information much more like the brain than a serial computer does. They are useful for learning information, for executing both simple and complex behaviors, and in applications such as powerful problem solvers and biological models. There are different types of neural networks: biological, feedforward, recurrent, and Elman. Biological neural networks require biological data to predict information. In feedforward networks, information flows in one direction. In recurrent networks, information flows in multiple directions. Elman networks feature partial recurrency with a sense of time.
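
A minimal sketch contrasting the feedforward and Elman cases described above: the feedforward pass is stateless, while the Elman network carries a context vector across time steps, giving it a sense of time. All sizes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.5, size=(4, 3))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(4, 4))   # context -> hidden (Elman only)
W_hy = rng.normal(scale=0.5, size=(2, 4))   # hidden -> output

def feedforward(x):
    """Information flows one way; no memory of earlier inputs."""
    return W_hy @ np.tanh(W_xh @ x)

def elman(xs):
    """Partial recurrency: the hidden state from the previous step is
    fed back as context, so outputs depend on input history."""
    h = np.zeros(4)
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)
        yield W_hy @ h

xs = rng.normal(size=(5, 3))
print(feedforward(xs[0]))
print(list(elman(xs)))
```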

