SOMO-m Optimization Algorithm with Multiple Winners

2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Wei Wu ◽  
Atlas Khan

Self-organizing map (SOM) neural networks have been widely applied in information sciences. In particular, Su and Zhao (2009) proposed an SOM-based optimization (SOMO) algorithm that finds, through a competitive learning process, a winning neuron that stands for the minimum of an objective function. In this paper, we generalize the SOMO algorithm to a so-called SOMO-m algorithm with m winning neurons. Numerical experiments show that, for m > 1, the SOMO-m algorithm converges faster than the original SOMO algorithm when used to find the minimum of a function. More importantly, the SOMO-m algorithm with m ≥ 2 can find two or more minima simultaneously in a single learning iteration process, whereas the original SOMO algorithm must accomplish the same task far less efficiently by restarting the learning iteration process two or more times.
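A minimal sketch of a multiple-winner competitive search in this spirit (the population size, learning rate, and noise schedule are illustrative assumptions, not the paper's SOMO-m update rules):

```python
import random

def somo_m(f, m=2, pop=60, iters=300, lr=0.5, seed=0):
    """Toy multiple-winner competitive search: keep the m best candidates
    (the 'winners') fixed and pull every other candidate toward its nearest
    winner, with annealed exploration noise. Illustrative only."""
    rng = random.Random(seed)
    xs = [rng.uniform(-3, 3) for _ in range(pop)]
    for t in range(iters):
        xs.sort(key=f)                         # winners = m lowest f values
        winners = xs[:m]
        sigma = 0.5 * (1 - t / iters)          # shrinking exploration noise
        for i in range(m, pop):
            w = min(winners, key=lambda v: abs(v - xs[i]))  # nearest winner
            xs[i] += lr * (w - xs[i]) + rng.gauss(0, sigma)
    xs.sort(key=f)
    return xs[:m]

# f has two global minima, at x = -1 and x = +1
f = lambda x: (x * x - 1) ** 2
wins = somo_m(f, m=2)
```

Because the winners are never perturbed, the best objective value found is monotone non-increasing over iterations.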

Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are usually left unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the following three problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
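An illustrative sketch of the idea (the cosine-similarity choice, the k-nearest-neighbour sparsification, and all names are assumptions, not the paper's exact construction): build an LGG from a batch of latent representations, then use the distance between a student's and a teacher's graphs as a distillation loss for problem (i).

```python
import numpy as np

def latent_geometry_graph(feats, k=2):
    """Cosine-similarity graph over a batch of latent representations,
    keeping only the k strongest edges per node (an assumed choice)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)             # exclude self-loops
    adj = np.zeros_like(sim)
    for i, row in enumerate(sim):
        for j in np.argsort(row)[-k:]:         # k most similar neighbours
            adj[i, j] = row[j]
    return adj

def geometry_distillation_loss(student_feats, teacher_feats, k=2):
    """Frobenius distance between student and teacher LGGs."""
    gs = latent_geometry_graph(student_feats, k)
    gt = latent_geometry_graph(teacher_feats, k)
    return float(np.linalg.norm(gs - gt))
```

Minimizing this loss with respect to the student's parameters would push the student batch geometry toward the teacher's.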


Author(s):  
José García-Rodríguez ◽  
Francisco Flórez-Revuelta ◽  
Juan Manuel García-Chamizo

Self-organising neural networks try to preserve the topology of an input space by means of competitive learning. This capacity has been used, among other applications, for the representation of objects and their motion. In this work we use a kind of self-organising network, the Growing Neural Gas, to represent deformations in objects along a sequence of images. As a result of an adaptive process, the objects are represented by a topology-representing graph that constitutes an induced Delaunay triangulation of their shapes. These maps adapt to changes in the objects' topology without resetting the learning process.
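The adaptive process can be sketched with a minimal Growing Neural Gas loop (Fritzke-style; all hyper-parameter values, and the omission of isolated-node removal, are simplifying assumptions):

```python
import random

def gng(data, max_nodes=20, iters=2000, eps_b=0.2, eps_n=0.006,
        age_max=50, lam=100, alpha=0.5, d=0.995, seed=0):
    """Minimal Growing Neural Gas on 2-D points. Simplified: isolated
    nodes are never removed. Hyper-parameters are illustrative defaults."""
    rng = random.Random(seed)
    nodes = [list(rng.choice(data)), list(rng.choice(data))]  # unit positions
    error = [0.0, 0.0]
    edges = {}                                    # {frozenset{i, j}: age}
    for t in range(1, iters + 1):
        x = rng.choice(data)
        dists = [sum((a - b) ** 2 for a, b in zip(n, x)) for n in nodes]
        s1, s2 = sorted(range(len(nodes)), key=dists.__getitem__)[:2]
        error[s1] += dists[s1]
        nodes[s1] = [a + eps_b * (b - a) for a, b in zip(nodes[s1], x)]
        for e in list(edges):                     # age edges incident to s1,
            if s1 in e:                           # drag neighbours toward x
                edges[e] += 1
                j = next(iter(e - {s1}))
                nodes[j] = [a + eps_n * (b - a) for a, b in zip(nodes[j], x)]
        edges[frozenset((s1, s2))] = 0            # refresh the winner edge
        edges = {e: a for e, a in edges.items() if a <= age_max}
        if t % lam == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=error.__getitem__)
            nbrs = [next(iter(e - {q})) for e in edges if q in e]
            if nbrs:                              # split q's worst edge
                f_ = max(nbrs, key=error.__getitem__)
                nodes.append([(a + b) / 2 for a, b in zip(nodes[q], nodes[f_])])
                error[q] *= alpha; error[f_] *= alpha
                error.append(error[q])
                r = len(nodes) - 1
                edges.pop(frozenset((q, f_)), None)
                edges[frozenset((q, r))] = 0
                edges[frozenset((f_, r))] = 0
        error = [e * d for e in error]
    return nodes, edges

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(200)]  # unit square
nodes, edges = gng(pts)
```

Run on a sequence of images, the same network (without resetting) keeps adapting its graph to the object's current shape.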


2008 ◽  
Vol 34 (6) ◽  
pp. 782-790 ◽  
Author(s):  
Manuel Alvarez-Guerra ◽  
Cristina González-Piñuela ◽  
Ana Andrés ◽  
Berta Galán ◽  
Javier R. Viguri

2009 ◽  
Vol 18 (08) ◽  
pp. 1353-1367 ◽  
Author(s):  
DONG-CHUL PARK

A Centroid Neural Network with Weighted Features (CNN-WF) is proposed in this paper. The proposed CNN-WF is based on the Centroid Neural Network (CNN), an effective clustering tool that has been successfully applied to various problems. In order to evaluate the importance of each feature in a data set, a feature weighting concept is introduced into the Centroid Neural Network in the proposed algorithm. The weight update equations for CNN-WF are derived by applying the Lagrange multiplier procedure to the objective function constructed for CNN-WF in this paper. The use of weighted features makes it possible to assess the importance of each feature and to reject features that can be considered noise in the data. Experiments on a synthetic data set and a typical image compression problem show that the proposed CNN-WF can assess the importance of each feature and outperforms conventional algorithms, including the Self-Organizing Map (SOM) and CNN, in terms of clustering accuracy.
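To illustrate the feature-weighting idea (the closed-form weight update below is the classic W-k-means result of a Lagrange multiplier on the constraint that the weights sum to one; it is an illustrative stand-in, not the paper's exact CNN-WF derivation):

```python
import numpy as np

def weighted_feature_clustering(X, k=2, beta=2.0, iters=20):
    """Centroid clustering with per-feature weights, in the spirit of
    CNN-WF. Noisy features accumulate large dispersion D_j and therefore
    receive small weights w_j, effectively rejecting them."""
    n, d = X.shape
    w = np.full(d, 1.0 / d)                        # feature weights, sum to 1
    C = X[np.linspace(0, n - 1, k, dtype=int)].copy()  # deterministic init
    for _ in range(iters):
        # assign each point to the nearest centroid under weighted distance
        dist = (((X[:, None, :] - C[None]) ** 2) * w ** beta).sum(-1)
        lab = dist.argmin(1)
        for j in range(k):                         # centroid update
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
        D = ((X - C[lab]) ** 2).sum(0) + 1e-12     # per-feature dispersion
        w = (1.0 / D) ** (1.0 / (beta - 1))        # Lagrange closed form
        w /= w.sum()                               # enforce sum(w) = 1
    return lab, C, w
```

On data whose clusters differ only along some features, the returned weight vector concentrates on the informative features.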


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Georgios Detorakis ◽  
Antoine Chaillet ◽  
Nicolas P. Rougier

We provide theoretical conditions guaranteeing that a self-organizing map efficiently develops representations of the input space. The study relies on a neural field model of spatiotemporal activity in area 3b of the primary somatosensory cortex. We rely on Lyapunov's theory for neural fields to derive theoretical conditions for stability, and we verify these conditions by numerical experiments. The analysis highlights the key role played by the balance between excitation and inhibition of lateral synaptic coupling and by the strength of synaptic gains in the formation and maintenance of self-organizing maps.
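For orientation, dynamic neural field models in this line of work are typically of Amari type, with a difference-of-Gaussians lateral coupling; the form below is the standard one (the symbols and the specific kernel are assumptions, the paper's exact equations and stability conditions may differ):

```latex
\tau \,\frac{\partial u(x,t)}{\partial t}
  = -u(x,t)
  + \int_{\Omega} w(x-y)\, f\big(u(y,t)\big)\,\mathrm{d}y
  + I(x,t),
\qquad
w(x) = A_e\, e^{-x^2 / 2\sigma_e^2} - A_i\, e^{-x^2 / 2\sigma_i^2} .
```

In such models, a Lyapunov-style stability argument constrains the excitatory amplitude $A_e$ relative to the inhibitory amplitude $A_i$ and the slope (gain) of the firing-rate function $f$, which is consistent with the excitation/inhibition balance the abstract highlights.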


Author(s):  
Yongquan Yan ◽  
Yu Zhu ◽  
Yanjun Li

Since resource consumption is the main cause of software aging, many methods have been applied to accurately predict resource consumption series. Among these methods, neural networks are widely used to forecast series data. Because of well-known problems of artificial neural networks, such as the choice of initialization and convergence to local optima, improving neural networks is a hot research topic both in time series prediction in general and in resource consumption prediction for software aging in particular. In this paper, we propose a method for resource consumption prediction in software aging using deep belief nets (DBNs) built from restricted Boltzmann machines (RBMs). The method consists of the following steps. First, the data are pre-processed in two parts: smoothing by a self-organizing map (SOM) and removal of a linear trend by a difference method. Second, a DBN with two RBMs is used to capture the features and forecast future values. Third, a glowworm swarm optimization (GSO) method is used to learn the hyper-parameters of the DBN. In the experiments, two types of resource consumption series are used to validate the proposed method against several state-of-the-art algorithms.
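The pre-processing step (smoothing plus detrending) can be sketched as follows; the tiny 1-D SOM used as a smoother and all parameter values are illustrative assumptions, not the paper's exact scheme:

```python
import math
import random

def difference(series):
    """Remove a linear trend by first-order differencing."""
    return [b - a for a, b in zip(series, series[1:])]

def som_smooth(series, n_units=8, iters=500, lr=0.3, seed=0):
    """Smooth a 1-D series by snapping each value to the nearest unit of a
    small 1-D SOM trained on the series. A toy quantiser for illustration."""
    rng = random.Random(seed)
    lo, hi = min(series), max(series)
    units = [lo + (hi - lo) * i / (n_units - 1) for i in range(n_units)]
    for t in range(iters):
        x = rng.choice(series)
        w = min(range(n_units), key=lambda i: abs(units[i] - x))  # winner
        for i in range(n_units):
            # simple neighbourhood: full pull for the winner, half for
            # its immediate neighbours, none elsewhere
            h = 1.0 if i == w else (0.5 if abs(i - w) == 1 else 0.0)
            units[i] += lr * (1 - t / iters) * h * (x - units[i])
    return [min(units, key=lambda u: abs(u - x)) for x in series]

series = [0.1 * i + math.sin(i) for i in range(100)]  # trend + oscillation
detrended = difference(series)
smoothed = som_smooth(series)
```

The detrended series would then be fed to the DBN forecaster, whose hyper-parameters the GSO step tunes.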

