Uniform Random Numbers
Recently Published Documents


TOTAL DOCUMENTS: 19 (five years: 0)
H-INDEX: 4 (five years: 0)

Author(s): A.F. Deon, V.A. Onuchin, Yu.A. Menyaev

Various pseudorandom number generation algorithms may be used to create a discrete stochastic plane. If a Cartesian completeness property is required of the plane, it must be uniform. If random number generation is left uncontrolled, however, the results may be of low quality, since the original sequences may omit some numbers or fail to be sufficiently uniform. We present a novel approach for generating stochastic Cartesian planes based on the model of complete twister sequences, which feature uniform random numbers without omissions or repetitions. Simulation results confirm that the random planes obtained are indeed perfectly uniform. Moreover, recombining the parameters of the original complete uniform sequence allows the number of planes created to be increased significantly without using any extra random access memory.
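The completeness property described above can be illustrated with a minimal sketch (this is not the authors' twister-based construction; the dimensions and use of a Fisher-Yates shuffle are assumptions for illustration): a small plane is filled with a random permutation of 0..W*H-1, so every value appears exactly once, with no omissions or repetitions.

```python
import random

# Illustrative sketch: build a small "complete" stochastic plane in which
# every value 0..W*H-1 appears exactly once (no omissions, no repetitions).
W, H = 4, 4
values = list(range(W * H))
random.shuffle(values)  # Fisher-Yates shuffle yields a uniform permutation

# Reshape the complete sequence into an H-row, W-column Cartesian plane
plane = [values[r * W:(r + 1) * W] for r in range(H)]

# Completeness check: flattening the plane recovers every value exactly once
flat = sorted(v for row in plane for v in row)
assert flat == list(range(W * H))
```

Because the plane is a permutation of the full value range, its empirical distribution is exactly uniform by construction rather than only uniform in expectation.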


2009, Vol. 9 (22), pp. 4071-4075
Author(s): R. Ayanzadeh, K. Hassani, Y. Moghaddas, H. Gheiby, S. Setayeshi

1997, Vol. 9 (5), pp. 398-405
Author(s): Hiromu Gotanda, Hiroshi Shiratsuchi, Katsuhiro Inoue, Kousuke Kumamaru, ...

This paper shows that multilayer nets of equal structure admit the same cardinality of admissible solutions for learning tasks whose input patterns are related by an affine transform, even if their sigmoid functions differ in polarity and range. This result can be applied to a scaling problem arising in building nets in analog hardware. When the input patterns and sigmoid functions are multiplied by a scaling factor k, separation and generalization are preserved if the weights are set to 1/k times the original values while the biases are kept intact. With such initial weights and biases, the convergence behavior of back-propagation (BP) learning in the scaled environment becomes equivalent to that in the original environment, provided that the learning coefficients are multiplied by 1/k² for bias updates and by 1/k⁴ for weight updates. For BP learning performed in the ordinary way, with both weights and biases initialized by uniform random numbers of identical distribution and a single learning coefficient used for both weight and bias updates, simulation shows that the range of initial values yielding good convergence decreases as k increases.
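The weight-scaling invariance stated above can be checked numerically with a minimal sketch (the one-hidden-unit net and its weight values are assumptions for illustration, not from the paper): scaling the inputs and sigmoid outputs by k while dividing the weights by k and keeping the biases intact leaves every pre-activation unchanged, so each layer's output differs from the original only by the factor k.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

k = 3.0
x = 0.7             # original input pattern (single scalar for simplicity)
w1, b1 = 0.5, 0.1   # hidden-layer weight and bias (illustrative values)
w2, b2 = -0.8, 0.2  # output-layer weight and bias (illustrative values)

# Original environment
h = sigmoid(w1 * x + b1)
y = sigmoid(w2 * h + b2)

# Scaled environment: inputs and sigmoid range multiplied by k,
# weights divided by k, biases unchanged
h_s = k * sigmoid((w1 / k) * (k * x) + b1)
y_s = k * sigmoid((w2 / k) * h_s + b2)

# Pre-activations coincide, so each layer's output is scaled by exactly k
assert abs(h_s - k * h) < 1e-12
assert abs(y_s - k * y) < 1e-12
```

The same cancellation, applied to the gradient expressions, is what motivates the 1/k² and 1/k⁴ corrections to the learning coefficients mentioned in the abstract.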

