Studies on Effects of Initialization on Structure Formation and Generalization of Structural Learning with Forgetting

Author(s):
Hiroshi Shiratsuchi, Hiromu Gotanda, Katsuhiro Inoue, Kousuke Kumamaru, et al.

In this paper, our previously proposed initialization for multilayer neural networks (NNs) is applied to structural learning with forgetting. The initialization consists of two steps: weights of hidden units are initialized so that their hyperplanes pass through the center of gravity of the input pattern set, and weights of output units are initialized to zero. Several simulations were performed to study how the initialization affects the structure formation of the NN. The simulation results confirmed that the initialization yields a better network structure and higher generalization ability.
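The two-step initialization described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function and parameter names are assumptions, and the uniform draw for hidden weight directions is one plausible choice. Each hidden hyperplane w·x + b = 0 is shifted through the input centroid c by setting b = -w·c; output weights start at zero.

```python
import numpy as np

def init_weights(X, n_hidden, n_out, rng=None):
    """Sketch of the two-step initialization (illustrative names).

    Step 1: hidden-unit hyperplanes pass through the centroid of X.
    Step 2: output-layer weights are set to zero.
    """
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    centroid = X.mean(axis=0)                       # center of gravity of the input set
    W_hidden = rng.uniform(-1.0, 1.0, (n_hidden, n_in))
    b_hidden = -W_hidden @ centroid                 # w.c + b = 0 => hyperplane through centroid
    W_out = np.zeros((n_out, n_hidden))             # step 2: output weights zero
    b_out = np.zeros(n_out)
    return W_hidden, b_hidden, W_out, b_out
```

With the output layer at zero, the network's initial output is flat regardless of the hidden-weight directions, so early training is driven by the error signal rather than by arbitrary random output weights.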

Author(s):
Hiroshi Shiratsuchi, Hiromu Gotanda, Katsuhiro Inoue, Kousuke Kumamaru, et al.

This paper studies how our previously proposed initialization affects rule extraction by neural networks trained with structural learning with forgetting. The proposed initialization consists of two steps: (1) initializing the weights of hidden units so that their separation hyperplanes pass through the center of the input pattern set, and (2) initializing those of output units to zero. Simulation results on Boolean function discovery problems with 5 and 7 inputs confirmed that the proposed initialization yields a simpler network structure and higher rule extraction ability than the conventional initialization, which assigns uniform random numbers to all initial weights of the network.


2001, Vol 13 (12), pp. 2851-2863
Author(s):
Masaki Ishii, Itsuo Kumazawa

In this article, we present a technique to improve the generalization ability of multilayer neural networks. The proposed method introduces linear constraints on the weight representation based on the invariance properties of the training targets. We propose a learning method that incorporates effective linear constraints into the error function as a penalty term. Furthermore, introducing such constraints reduces the VC dimension of the neural network, and we derive bounds on the VC dimension of networks with such constraints. Finally, we demonstrate the effectiveness of the proposed method through experiments.
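The penalty-term idea can be illustrated with a toy squared-error loss. This is a hedged sketch under stated assumptions: the constraint is written as A·w = 0 with an illustrative matrix A, whereas the paper derives its constraints from invariances of the training targets; all names here are hypothetical.

```python
import numpy as np

def constrained_loss(w, X, y, A, lam=0.1):
    """Squared-error loss plus a penalty enforcing the linear
    constraint A @ w = 0 (illustrative constraint form and names)."""
    err = X @ w - y
    data_term = 0.5 * np.mean(err ** 2)
    penalty = 0.5 * lam * np.sum((A @ w) ** 2)   # penalize violation of A w = 0
    return data_term + penalty
```

For example, a weight-sharing constraint w[0] = w[1] corresponds to a single constraint row A = [[1, -1]]; as lam grows, minimizing the loss drives the two weights together, which is how such constraints shrink the effective hypothesis class.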


2013, Vol 58 (3), pp. 871-875
Author(s):
A. Herberg

Abstract This article outlines a methodology for modeling the self-induced vibrations that occur during the machining of metal objects, i.e. when shaping casting patterns on CNC machining centers. The modeling process presented here is based on an algorithm that uses local-model fuzzy-neural networks. The algorithm draws on the advantages of fuzzy systems with Takagi-Sugeno-Kang (TSK) consequents and of neural networks, with auxiliary modules that help optimize and shorten the time needed to identify the best possible network structure. Modeling the self-induced vibrations allows analyzing how the vibrations arise. This in turn makes it possible to develop effective ways of eliminating these vibrations and, ultimately, to design a practical control system that disposes of the vibrations altogether.
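A TSK fuzzy model of the kind mentioned above combines rule firing strengths with local linear consequents. The sketch below is a generic first-order TSK evaluator, not the article's algorithm: the Gaussian membership functions and all names are illustrative assumptions.

```python
import numpy as np

def tsk_predict(x, centers, widths, A, b):
    """Minimal first-order Takagi-Sugeno-Kang (TSK) model sketch.

    Each rule i has a Gaussian membership around centers[i] and a
    local linear consequent A[i] @ x + b[i]; the output is the
    firing-strength-weighted average of the local models.
    """
    mu = np.exp(-((x - centers) ** 2).sum(axis=1) / (2 * widths ** 2))  # rule firing strengths
    local = A @ x + b                                 # local linear model outputs, one per rule
    return (mu * local).sum() / mu.sum()              # weighted average over rules
```

Local-model networks of this form interpolate smoothly between linear models fitted in different operating regions, which is why they suit processes like machining vibration whose dynamics change with the operating point.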

