Verification of Continuous Time Recurrent Neural Networks (Benchmark Proposal)

10.29007/6czp ◽  
2018 ◽  
Author(s):  
Patrick Musau ◽  
Taylor T. Johnson

This manuscript presents a description and implementation of two benchmark problems for continuous-time recurrent neural network (RNN) verification. The first problem deals with the approximation of a vector field for a fixed-point attractor located at the origin, whereas the second deals with the system identification of a forced damped pendulum. While the verification of neural networks is complicated and often impenetrable to the majority of verification techniques, continuous-time RNNs, a class of networks derived originally in biology and neuroscience, may be accessible to reachability methods for nonlinear ordinary differential equations (ODEs). Thus, an understanding of the behavior of an RNN may be gained by simulating the nonlinear equations from a diverse set of initial conditions and inputs, or by performing reachability analysis from a set of initial conditions. The verification of continuous-time RNNs is a research area that has received little attention; if the research community can achieve meaningful results in this domain, this class of neural networks may prove superior to other network architectures in solving complex problems.
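The simulation-based view described above can be sketched in a few lines. The weight matrix below is a hypothetical example chosen so that the origin is a fixed-point attractor (the Jacobian at the origin, (-I + W)/tau, is stable because W's eigenvalues have real part below 1); it is not the benchmark's actual vector field.

```python
import numpy as np

def ctrnn_step(x, W, tau, dt):
    """One forward-Euler step of the CTRNN dynamics tau * dx/dt = -x + W @ tanh(x)."""
    return x + dt * (-x + W @ np.tanh(x)) / tau

# Illustrative weights: spectral norm of W is below 1, so the origin attracts.
W = np.array([[0.5, -0.3],
              [0.3,  0.5]])
tau = 1.0
dt = 0.01

# Simulate from a diverse set of initial conditions, as the abstract suggests.
rng = np.random.default_rng(0)
finals = []
for x0 in rng.uniform(-2, 2, size=(20, 2)):
    x = x0.copy()
    for _ in range(5000):          # integrate to t = 50
        x = ctrnn_step(x, W, tau, dt)
    finals.append(np.linalg.norm(x))

print(max(finals))  # every trajectory ends near the origin
```

Reachability tools for nonlinear ODEs would replace the sampled initial points with a whole set and propagate it symbolically, but the sampled picture above is often the first diagnostic.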

1995 ◽  
Vol 06 (04) ◽  
pp. 463-472
Author(s):  
STEVE G. ROMANIUK

The ability to derive minimal network architectures for neural networks has been at the center of attention for several years now. To date, numerous algorithms have been proposed to automatically construct networks. Unfortunately, these algorithms lack a fundamental theoretical analysis of their capabilities, and only empirical evaluations on a few selected benchmark problems exist. Some theoretical results have been provided for small classes of well-known benchmark problems such as parity and encoder functions, but these are of little value due to their restrictiveness. In this work we describe a general class of 2-layer networks with 2 hidden units capable of representing a large set of problems. The cardinality of this class grows exponentially with the number of inputs N. Furthermore, we outline a simple algorithm that allows us to determine whether any function (problem) is a member of this class. The class considered in this paper includes the benchmark problems parity and symmetry. Finally, we expand this class to include an even larger set of functions and point out several interesting properties it exhibits.
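For concreteness, the smallest parity instance, 2-bit parity (XOR), admits a well-known 2-hidden-unit threshold construction. This is a standard textbook illustration of what two hidden units can represent, not the membership algorithm from the paper.

```python
def step(z):
    """Hard threshold unit: fires iff its net input is positive."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden unit 1 computes OR, hidden unit 2 computes AND.
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    # Output: OR and not AND, i.e. exactly one input active = 2-bit parity.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```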


2019 ◽  
Vol 2019 (1) ◽  
pp. 153-158
Author(s):  
Lindsay MacDonald

We investigated how well a multilayer neural network could implement the mapping between two trichromatic color spaces, specifically from camera R,G,B to tristimulus X,Y,Z. For training the network, a set of 800,000 synthetic reflectance spectra was generated. For testing the network, a set of 8,714 real reflectance spectra was collated from instrumental measurements on textiles, paints and natural materials. Various network architectures were tested, with both linear and sigmoidal activations. Results show that over 85% of all test samples had color errors of less than 1.0 ΔE2000 units, much more accurate than could be achieved by regression.
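A rough sketch of the regression baseline that the trained network outperforms, using synthetic data and an invented 3×3 matrix plus a mild nonlinearity (the actual study used measured reflectance spectra, a trained multilayer network, and ΔE2000 color differences):

```python
import numpy as np

# Hypothetical RGB -> XYZ data: a known 3x3 matrix applied after a slight
# nonlinearity, which a purely linear regression cannot capture exactly.
rng = np.random.default_rng(1)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.uniform(0, 1, size=(1000, 3))
xyz = (rgb ** 1.1) @ M_true.T

# Least-squares linear fit M with xyz ≈ rgb @ M (the regression baseline).
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
residual = np.abs(rgb @ M_fit - xyz).mean()
print(residual)  # nonzero: the linear map cannot absorb the nonlinearity
```

A network with sigmoidal hidden units can model the nonlinearity directly, which is the effect the abstract reports.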


2021 ◽  
Vol 54 (4) ◽  
pp. 1-38
Author(s):  
Varsha S. Lalapura ◽  
J. Amudha ◽  
Hariram Selvamurugan Satheesh

Recurrent Neural Networks are ubiquitous and pervasive in many artificial intelligence applications such as speech recognition, predictive healthcare, and creative art. Although they deliver highly accurate solutions, they are notoriously difficult to train. The current expansion of the IoT demands that intelligent models be deployed at the edge, precisely where growing model sizes and complex network architectures are hardest to accommodate. Design efforts aimed at greater accuracy have had adverse effects on portability to edge devices, which operate under real-time constraints on memory, latency, and energy. This article provides detailed insight into the compression techniques widely disseminated in the deep learning regime, which have become key to mapping powerful RNNs onto resource-constrained devices. While compression of RNNs is the main focus of the survey, it also highlights challenges encountered during training, since the training procedure directly influences both model performance and compressibility. Recent advances that address these training challenges are discussed, along with their strengths and drawbacks. In short, the survey covers a three-step process: architecture selection, an efficient training procedure, and a compression technique suitable for a resource-constrained environment. It is thus a comprehensive guide that a developer can adapt for a time-series problem context and an RNN solution at the edge.
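As one concrete instance of the compression techniques such surveys cover, unstructured magnitude pruning of a recurrent weight matrix can be sketched as follows (illustrative values; this is a generic technique, not a specific method from the article):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.
    One of the simplest compression techniques for fitting RNNs on edge devices."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    # Threshold = k-th smallest absolute weight; everything at or below it goes.
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    mask = np.abs(W) > thresh
    return W * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))        # e.g. a recurrent weight matrix
W_pruned = magnitude_prune(W, 0.9)
print(np.mean(W_pruned == 0))        # ≈ 90% of the weights removed
```

In practice pruning is interleaved with fine-tuning, and quantization or low-rank factorization is applied on top, which is where the interaction with the training procedure noted above comes in.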


2021 ◽  
Vol 47 (1) ◽  
Author(s):  
Fabian Laakmann ◽  
Philipp Petersen

We demonstrate that deep neural networks with the ReLU activation function can efficiently approximate the solutions of various types of parametric linear transport equations. For non-smooth initial conditions, the solutions of these PDEs are high-dimensional and non-smooth. Therefore, the approximation of these functions suffers from a curse of dimensionality. We demonstrate that, through their inherent compositionality, deep neural networks can resolve the characteristic flow underlying the transport equations and thereby allow approximation rates independent of the parameter dimension.
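The compositional structure being exploited is visible in the classical method of characteristics. For a transport equation with a parameter-dependent constant speed $a(p)$ (a standard textbook case, simpler than the paper's general setting), the solution is the initial datum composed with an affine characteristic map:

```latex
% Linear transport with constant, parameter-dependent speed a(p):
\[
\partial_t u(x,t) + a(p)\cdot\nabla_x u(x,t) = 0, \qquad u(x,0) = u_0(x)
\quad\Longrightarrow\quad
u(x,t) = u_0\bigl(x - a(p)\,t\bigr).
\]
```

A network that represents the inner map $(x,t,p)\mapsto x - a(p)\,t$ can be composed with a network approximating $u_0$, so the parameter only enters through a low-complexity inner layer; this is the intuition behind rates independent of the parameter dimension.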


1999 ◽  
Vol 09 (10) ◽  
pp. 2105-2126 ◽  
Author(s):  
TAO YANG ◽  
LEON O. CHUA

The small-world phenomenon can occur in coupled dynamical systems that are highly clustered at the local level and yet strongly coupled at the global level. We show that cellular neural networks (CNNs) can exhibit the "small-world phenomenon". We generalize the "characteristic path length" from previous works on the "small-world phenomenon" into a "characteristic coupling strength" for measuring the average coupling strength of the outputs of CNNs. We also provide a simplified algorithm for calculating the "characteristic coupling strength" with a reasonable amount of computing time. We define a "clustering coefficient" and show how it can be calculated by a horizontal "hole detection" CNN followed by a vertical "hole detection" CNN. Evolutions of the game-of-life CNN with different initial conditions are used to illustrate the emergence of a "small-world phenomenon". Our results show that the well-known game-of-life CNN is not a small-world network. However, generalized CNN life games whose individuals have strong mobility and a high survival rate can exhibit the small-world phenomenon in a robust way. Our simulations confirm the conjecture that a population with strong mobility is more likely to qualify as a small world. CNN games whose individuals have weak mobility can also exhibit a small-world phenomenon under a proper choice of initial conditions. However, the resulting small worlds depend strongly on the initial conditions, and are therefore not robust.
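The standard small-world metrics that the paper generalizes, characteristic path length and clustering coefficient in the sense of Watts and Strogatz, can be computed with plain breadth-first search. The graph below is a toy example, not a CNN coupling structure.

```python
from collections import deque

def path_length(adj, src):
    """BFS shortest-path lengths from src in an undirected graph (dict of sets)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length(adj):
    """Average shortest-path length over all ordered pairs of nodes."""
    n = len(adj)
    total = sum(d for s in adj for d in path_length(adj, s).values())
    return total / (n * (n - 1))

def clustering_coefficient(adj):
    """Average fraction of a node's neighbour pairs that are themselves linked."""
    coeffs = []
    for u, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# A 4-cycle with one chord: edges 0-1, 1-2, 2-3, 3-0, 0-2.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(characteristic_path_length(adj), clustering_coefficient(adj))
```

A small-world network has a clustering coefficient far above that of a random graph while keeping a comparably short characteristic path length; the paper's "characteristic coupling strength" replaces hop counts with output coupling strengths.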


1980 ◽  
Vol 12 (1) ◽  
pp. 81-93 ◽  
Author(s):  
B. Klein ◽  
P. D. M. MacDonald

The multitype continuous-time Markov branching process has many biological applications where the environmental factors vary in a periodic manner. Circadian or diurnal rhythms in cell kinetics are an important example. It is shown that in the supercritical positively regular case the proportions of individuals of various types converge in probability to a non-random periodic vector, independent of the initial conditions, while the absolute numbers of individuals of various types converge in probability to that vector multiplied by a random variable whose distribution depends on the initial conditions. It is noted that the proofs are straightforward extensions of the well-known results for a constant environment.
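In the constant-environment case whose well-known results the proofs extend, the convergence of expected type proportions can be illustrated with a mean-offspring matrix iteration; the matrix entries below are hypothetical numbers chosen to make the process supercritical and positively regular.

```python
import numpy as np

# M[i, j] = expected number of type-j offspring of a type-i individual.
# All entries positive (positively regular) and Perron root 1.5 > 1 (supercritical).
M = np.array([[1.2, 0.4],
              [0.3, 1.1]])

def proportions(z0, steps=50):
    """Expected type proportions after `steps` generations from counts z0."""
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        z = z @ M          # expected counts one generation later
    return z / z.sum()

p_a = proportions([1, 0])
p_b = proportions([0, 100])
print(p_a, p_b)  # same limiting proportions despite different initial conditions
```

In the periodic-environment setting of the paper the limit is a periodic vector rather than a constant one, but the independence from the initial conditions is the same phenomenon.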

