A self-organizing map (SOM) based electric load classification

2018 ◽  
Vol 31 (4) ◽  
pp. 571-583
Author(s):  
Mahdi Farhadi

It is of vital importance to use proper training data to perform accurate short-term load forecasting (STLF) based on artificial neural networks. The pattern of the loads used to train the Kohonen self-organizing map (SOM) neural network in STLF models should be as similar as possible to the electric load pattern of the forecasting day. In this paper, an electric load classifier model is proposed that relies on the pattern recognition capability of the SOM. The performance of the proposed electric load classifier is evaluated on Iranian electric grid data. The proposed method requires very few training samples to train the Kohonen neural network of the STLF model and can accurately predict the electric load in the network.
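The day-classification idea can be sketched as follows. This is a minimal illustration, not the paper's model: a 1-D Kohonen SOM clusters synthetic daily load profiles, and only days whose cluster matches the forecasting day's pattern are kept as the (small) STLF training set. All data shapes, profile formulas, and hyperparameters here are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=2, epochs=50, lr0=0.5, sigma0=1.0):
    """Train a 1-D Kohonen SOM; `data` is (n_days, 24) hourly load profiles."""
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in rng.permutation(data):
            bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))   # best-matching unit
            grid_dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-grid_dist**2 / (2 * sigma**2))[:, None]  # neighbourhood
            w += lr * h * (x - w)
    return w

def classify(w, x):
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

# Toy data: two synthetic daily shapes (evening peak vs midday peak).
hours = np.arange(24)
weekday = 50 + 30 * np.exp(-(hours - 18) ** 2 / 8)
weekend = 40 + 15 * np.exp(-(hours - 12) ** 2 / 18)
days = np.vstack([weekday + rng.normal(0, 1, 24) for _ in range(20)] +
                 [weekend + rng.normal(0, 1, 24) for _ in range(20)])

w = train_som(days)
labels = np.array([classify(w, d) for d in days])
# Only days whose class matches the forecasting day's pattern would be
# fed to the STLF network, instead of the whole history.
forecast_class = classify(w, weekday)
similar_days = days[labels == forecast_class]
```

The reduced set `similar_days` is what would replace the full history when training the forecasting network.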

2014 ◽  
Vol 563 ◽  
pp. 308-311 ◽  
Author(s):  
Yu Lian Jiang

In a robotic-fish water polo game, where each team faces multiple balls and fields multiple robotic fish, finding a reasonable task allocation plan is the key to winning. To solve this problem, this paper proposes a multi-target task allocation method based on the self-organizing map (SOM) neural network. The method takes the ball positions as the input vector, competitively compares the positions of the balls and the robotic fish, and outputs the robotic fish assigned to each ball. As its weight is adjusted, each robotic fish moves toward, and finally reaches, its target ball. Simulations show that a team using this method scores higher than the opposing team, which confirms the correctness and reliability of the method.
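A hedged sketch of this allocation scheme (function names, the one-fish-per-ball constraint, and the update dynamics are my assumptions, not the paper's exact formulation): ball positions act as input vectors, each fish is a neuron whose weight is its position, the nearest unassigned fish wins each ball, and the winner's weight update is exactly its motion toward the target.

```python
import numpy as np

def allocate_and_move(fish, balls, lr=0.5, steps=30):
    """SOM-style competitive allocation.

    fish:  (n_fish, 2) positions, treated as neuron weights.
    balls: (n_balls, 2) positions, treated as input vectors.
    """
    fish = fish.astype(float).copy()
    assignment = {}
    for _ in range(steps):
        taken = set()
        for b, ball in enumerate(balls):
            d = np.linalg.norm(fish - ball, axis=1)
            # Competition: nearest fish not yet assigned this round wins.
            order = np.argsort(d)
            winner = int(next(i for i in order if int(i) not in taken))
            taken.add(winner)
            assignment[b] = winner
            # Weight adjustment doubles as motion toward the target ball.
            fish[winner] += lr * (ball - fish[winner])
    return fish, assignment

fish = np.array([[0.0, 0.0], [10.0, 0.0]])
balls = np.array([[1.0, 5.0], [9.0, 5.0]])
final, assign = allocate_and_move(fish, balls)
# Each fish converges onto its own ball: assign == {0: 0, 1: 1}.
```

The exclusion set `taken` is one simple way to keep two fish from chasing the same ball; the paper may enforce this differently.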


2018 ◽  
Vol 28 (2) ◽  
pp. 411-424 ◽  
Author(s):  
Serkan Kartal ◽  
Mustafa Oral ◽  
Buse Melis Ozyildirim

Abstract In a general regression neural network (GRNN), the number of neurons in the pattern layer is proportional to the number of training samples in the dataset. The use of a GRNN in applications that have relatively large datasets becomes troublesome due to the architecture and speed required. The great number of neurons in the pattern layer requires a substantial increase in memory usage and causes a substantial decrease in calculation speed. Therefore, there is a strong need for pattern layer size reduction. In this study, a self-organizing map (SOM) structure is introduced as a pre-processor for the GRNN. First, an SOM is generated for the training dataset. Second, each training record is labelled with the most similar map unit. Lastly, when a new test record is applied to the network, the most similar map units are detected, and the training data that have the same labels as the detected units are fed into the network instead of the entire training dataset. This scheme enables a considerable reduction in the pattern layer size. The proposed hybrid model was evaluated by using fifteen benchmark test functions and eight different UCI datasets. According to the simulation results, the proposed model significantly simplifies the GRNN’s structure without any performance loss.
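The hybrid scheme can be sketched compactly. This is my simplified reading, on a toy 1-D regression problem with assumed hyperparameters: a small SOM labels every training record with its best-matching unit, and at query time only the records sharing the query's label form the GRNN pattern layer.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(X, w, epochs=60, lr0=0.5, sigma0=1.0):
    """1-D Kohonen SOM; `w` is the (n_units, n_features) initial codebook."""
    n_units = len(w)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in rng.permutation(X):
            b = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            h = np.exp(-(np.arange(n_units) - b) ** 2 / (2 * sigma**2))[:, None]
            w += lr * h * (x - w)
    return w

def bmu(w, x):
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

def grnn(Xp, yp, x, sigma=0.3):
    """Plain GRNN: kernel-weighted average over the pattern layer (Xp, yp)."""
    k = np.exp(-((Xp - x) ** 2).sum(axis=1) / (2 * sigma**2))
    return float(k @ yp / k.sum())

# Toy regression task: y = sin(x) on [0, 2*pi].
X = np.linspace(0, 2 * np.pi, 120)[:, None]
y = np.sin(X[:, 0])

w = train_som(X, w=np.array([[1.0], [3.0], [5.0]]))  # spread initial codebook
labels = np.array([bmu(w, x) for x in X])            # label records by map unit

xq = np.array([1.0])
subset = labels == bmu(w, xq)          # records sharing the query's unit
pred = grnn(X[subset], y[subset], xq)  # reduced pattern layer only
```

Only `subset.sum()` of the 120 training records enter the pattern layer for this query, which is the memory and speed saving the paper targets.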


1995 ◽  
Vol 7 (4) ◽  
pp. 822-844 ◽  
Author(s):  
Peter Tiňo ◽  
Jozef Šajda

A hybrid recurrent neural network is shown to learn small initial Mealy machines (which can be thought of as translation machines mapping input strings to corresponding output strings, as opposed to recognition automata that classify strings as either grammatical or nongrammatical) from positive training samples. The well-trained network is then presented with the training set once again, and a Kohonen self-organizing map with a “star” topology of neurons is used to quantize the recurrent network's state space into distinct regions representing the corresponding states of the Mealy machine being learned. This enables the learned Mealy machine to be extracted from the trained recurrent network. One neural network (the Kohonen self-organizing map) is thus used to extract meaningful information from another (the recurrent neural network).
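The extraction step can be illustrated in miniature. This sketch does not reproduce the paper's star-topology SOM or its recurrent network: instead, noisy 2-D "hidden states" are generated from a known two-state Mealy machine (parity of 1s), quantized by a two-unit competitive map, and the transition and output tables are rebuilt by majority vote over the quantized traces. All constants are my assumptions.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth Mealy machine (parity): delta[state][sym], out[state][sym].
delta = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}
out   = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 0}}
# Stand-ins for the RNN's hidden-state clusters, one per machine state.
centers = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 1.0])}

# Record traces of (state_vec, symbol, next_state_vec, output).
traces = []
for _ in range(200):
    s = 0
    for sym in rng.integers(0, 2, size=8):
        sym = int(sym)
        ns, o = delta[s][sym], out[s][sym]
        traces.append((centers[s] + rng.normal(0, 0.1, 2), sym,
                       centers[ns] + rng.normal(0, 0.1, 2), o))
        s = ns

# Quantize state vectors with a 2-unit map (plain competitive learning here).
w = np.array([[0.2, 0.1], [0.8, 0.9]])
for _ in range(3):
    for v, _, nv, _ in traces:
        for x in (v, nv):
            b = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            w[b] += 0.1 * (x - w[b])

q = lambda x: int(np.argmin(((w - x) ** 2).sum(axis=1)))

# Rebuild transition/output tables by majority over quantized traces.
trans, outputs = {}, {}
for v, sym, nv, o in traces:
    trans.setdefault((q(v), sym), Counter())[q(nv)] += 1
    outputs.setdefault((q(v), sym), Counter())[o] += 1
machine = {k: (c.most_common(1)[0][0], outputs[k].most_common(1)[0][0])
           for k, c in trans.items()}
# `machine` maps (quantized_state, input) -> (next_quantized_state, output).
```

With well-separated state clusters, the quantized regions line up with the true machine states and the extracted table matches the ground truth up to a relabelling of units.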


2021 ◽  
Author(s):  
Gamal Alusta ◽  
Hossein Algdamsi ◽  
Ahmed Amtereg ◽  
Ammar Agnia ◽  
Ahmed Alkouh ◽  
...  

Abstract In this paper we introduce, for the first time, an innovative approach for deriving the oil formation volume factor (Bo) by means of an artificial intelligence method. In the proposed application, self-organizing map (SOM) technology is merged with statistical prediction methods, integrating in a single step dimensionality reduction, extraction of the input data's structural patterns, and prediction of the formation volume factor Bo. The SOM neural network method applies an unsupervised training algorithm, combined with a back-propagation neural network (BPNN), to subdivide the entire PVT input set into distinct patterns (identifying sets of data that have something in common), and then runs an individual MLFF ANN model for each PVT cluster to compute Bo. PVT data for more than two hundred oil samples (804 data points in total), collected from the North African region, representing different basins and covering a wide geographical area, were used in this study. To establish clear bounds on the accuracy of Bo determination, several statistical parameters and terms are included in the presentation of the results from the SOM-neural-network solution. The main outcome is that the proposed competitive-learning structure integrating SOM and MLFF ANN reduces the error to less than 1% compared with other methods.
Five independent model-driven and data-driven approaches for estimating Bo were also investigated in this work: 1) optimal transformations for multiple regression, as introduced by McCain (1998), using alternating conditional expectations (ACE) to select the regression transformations; 2) genetic programming and heuristic modelling using symbolic regression (SR) with cross-validation for automatic model tuning; 3) machine-learning predictive models (nearest neighbour regression, kernel ridge regression, Gaussian process regression (GPR), random forest regression (RF), support vector regression (SVM), decision tree regression (DT), gradient boosting machine regression (GBM), and group method of data handling (GMDH)). Regression model accuracy metrics (average absolute relative error, R-squared) and diagnostic plots were used to identify the most adequate techniques and models for predicting Bo.
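The cluster-then-model idea can be sketched in a heavily simplified form. This is not the paper's pipeline: the data are synthetic rather than PVT samples, and an ordinary least-squares model stands in for each cluster's MLFF ANN. The point illustrated is only the structure: a SOM partitions the inputs into patterns, and a separate regressor per pattern predicts Bo.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: two regimes with different linear Bo responses.
X0 = rng.uniform(0, 1, (60, 2)); y0 = 1.1 + 0.2 * X0[:, 0]
X1 = rng.uniform(3, 4, (60, 2)); y1 = 1.4 + 0.5 * X1[:, 1]
X, y = np.vstack([X0, X1]), np.concatenate([y0, y1])

# A 2-unit map (plain competitive learning) partitions the input space.
w = X[rng.choice(len(X), 2, replace=False)].astype(float)
for _ in range(5):
    for x in rng.permutation(X):
        b = int(np.argmin(((w - x) ** 2).sum(axis=1)))
        w[b] += 0.1 * (x - w[b])
bmu = lambda x: int(np.argmin(((w - x) ** 2).sum(axis=1)))

# One least-squares model per cluster (stand-in for a per-cluster ANN).
cluster = np.array([bmu(x) for x in X])
models = {}
for c in (0, 1):
    idx = cluster == c
    A = np.c_[X[idx], np.ones(idx.sum())]   # affine design matrix
    models[c], *_ = np.linalg.lstsq(A, y[idx], rcond=None)

def predict_bo(x):
    """Route the sample to its cluster's model, then predict."""
    return float(np.r_[x, 1.0] @ models[bmu(x)])
```

Because each regime here is exactly linear, each cluster's model recovers its regime's response once the map has separated the two patterns; the paper's point is the same routing structure with ANN regressors and real PVT clusters.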


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Khaled Ben Khalifa ◽  
Ahmed Ghazi Blaiech ◽  
Mohamed Hédi Bedoui

In this article, we propose a new modular architecture for a self-organizing map (SOM) neural network. The proposed approach, called systolic-SOM (SSOM), is based on a generic model inspired by systolic movement and is formed by two levels of nested parallelism of neurons and connections. This solution provides a distributed set of independent computations between the processing units, called neuroprocessors (NPs), which define the SSOM architecture. The NP modules have an innovative architecture compared with those proposed in the literature: each NP performs three different tasks without requiring additional external modules. To validate our approach, we evaluate the performance of several SOM network architectures after implementing them on an FPGA. The architecture achieves performance almost twice as fast as that reported in the recent literature.


2010 ◽  
Vol 19 (01) ◽  
pp. 191-202 ◽  
Author(s):  
VITOANTONIO BEVILACQUA ◽  
GIUSEPPE MASTRONARDI ◽  
VITO SANTARCANGELO ◽  
ROCCO SCARAMUZZI

In this paper, two different methodologies, based respectively on an unsupervised self-organizing map (SOM) neural network and on graph matching, are presented and discussed to validate the performance of a new 3D facial feature identification and localization algorithm. Experiments are performed on a dataset of 23 3D faces acquired with a 3D laser camera at the eBIS lab, with pose and expression variations. In particular, the results for five nose landmarks are encouraging and confirm the validity of the approach, which, despite its low computational complexity and the small number of landmarks, guarantees an average face recognition performance greater than 80%.

