Guiding Neuroevolution with Structural Objectives

2020 ◽  
Vol 28 (1) ◽  
pp. 115-140 ◽  
Author(s):  
Kai Olav Ellefsen ◽  
Joost Huizinga ◽  
Jim Torresen

The structure and performance of neural networks are intimately connected, and by use of evolutionary algorithms, neural network structures optimally adapted to a given task can be explored. Guiding such neuroevolution with additional objectives related to network structure has been shown to improve performance in some cases, especially when modular neural networks are beneficial. However, apart from objectives aiming to make networks more modular, such structural objectives have not been widely explored. We propose two new structural objectives and test their ability to guide evolving neural networks on two problems which can benefit from decomposition into subtasks. The first structural objective guides evolution to align neural networks with a user-recommended decomposition pattern. Intuitively, this should be a powerful guiding target for problems where human users can easily identify a structure. The second structural objective guides evolution towards a population with a high diversity in decomposition patterns. This results in exploration of many different ways to decompose a problem, allowing evolution to find good decompositions faster. Tests on our target problems reveal that both methods perform well on a problem with a very clear and decomposable structure. However, on a problem where the optimal decomposition is less obvious, the structural diversity objective is found to outcompete other structural objectives—and this technique can even increase performance on problems without any decomposable structure at all.
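The structural diversity objective described above rewards populations that decompose the problem in many different ways. A minimal sketch of how such a score could be computed, assuming (as our illustration, not the paper's encoding) that each network's decomposition pattern is a binary module-assignment vector over its neurons:

```python
import numpy as np

def decomposition_diversity(patterns):
    """Mean pairwise Hamming distance between binary decomposition
    patterns (one row per individual, one column per neuron)."""
    patterns = np.asarray(patterns)
    n = len(patterns)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.mean(patterns[i] != patterns[j])
    return total / (n * (n - 1) / 2)

# A population that decomposes the task identically scores 0;
# maximal disagreement scores 1.
uniform = [[0, 0, 1, 1]] * 3
assert decomposition_diversity(uniform) == 0.0
mixed = [[0, 0, 1, 1], [1, 1, 0, 0]]
print(decomposition_diversity(mixed))  # 1.0
```

Used as an extra objective, a score like this pressures evolution to explore many decompositions in parallel rather than converging on one.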

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Lingfeng Wang

The TV show rating analysis and prediction system can collect information and upload it to the database more quickly. The convolutional neural network is a multilayer neural network structure that simulates the operating mechanism of biological vision systems. It is composed of multiple convolutional layers and downsampling layers connected in sequence. It can obtain useful feature descriptions from raw data and is an effective method for extracting features. At present, convolutional neural networks have become a research hotspot in speech recognition, image recognition and classification, natural language processing, and other fields, where they have been widely and successfully applied. Therefore, this paper introduces the convolutional neural network structure to predict TV program rating data. First, it briefly introduces artificial neural networks and deep learning methods and focuses on the algorithm principles of convolutional neural networks and support vector machines. Then, we adapt the convolutional neural network to fit the TV program rating data and finally apply the two prediction models to TV program rating prediction. The improved convolutional neural network rating prediction model combines the network's ability to extract effective features with its good classification and prediction capabilities to improve prediction accuracy. Through simulation comparison, we verify the feasibility and effectiveness of the TV program rating prediction model given in this article.
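The convolution-plus-downsampling structure the abstract describes can be sketched in a few lines. This is a generic illustration of the two layer types applied to a toy rating series, not the paper's model:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution: slide the kernel over the series."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def downsample(x, pool=2):
    """Non-overlapping max pooling, the usual downsampling layer."""
    trimmed = x[:len(x) // pool * pool]
    return trimmed.reshape(-1, pool).max(axis=1)

ratings = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0, 2.0])
edge = conv1d(ratings, np.array([1.0, -1.0]))  # first-difference filter
print(downsample(edge))
```

Stacking several such convolution/pooling pairs, with learned rather than hand-picked kernels, yields the feature extractor the paper feeds into its predictor.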


Author(s):  
Goran Klepac

A developed neural network can have numerous potential outputs arising from the many possible combinations of input values. Finding the optimal combination of input values that achieves a specific output value of a neural network model is not a trivial task. This need arises, for example, in profiling, when a neural network gives information on a specific profile with respect to its inputs, or in recommendation systems realized with neural networks. Evolutionary algorithms such as the particle swarm optimization algorithm, which is illustrated in this chapter, can solve these problems.
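A minimal sketch of the idea: treat the trained network as a black-box function and let a particle swarm search the input space for the combination that maximizes the desired output. The small fixed-weight network below is a hypothetical stand-in for a trained model, and the parameter choices are generic defaults, not the chapter's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained network: small fixed-weight tanh net.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
w2 = np.array([1.0, 1.0])

def net(x):
    """Output of the (hypothetical) trained network for input x."""
    return w2 @ np.tanh(W1 @ x + b1)

def pso(f, dim=2, particles=20, iters=100, lo=-3.0, hi=3.0):
    """Minimal particle swarm search for the input maximizing f: each
    particle tracks its personal best, the swarm a global best, and
    velocities blend inertia with both attractors."""
    pos = rng.uniform(lo, hi, (particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pval = np.array([f(p) for p in pos])
    gbest = pbest[pval.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([f(p) for p in pos])
        improved = val > pval
        pbest[improved] = pos[improved]
        pval[improved] = val[improved]
        gbest = pbest[pval.argmax()].copy()
    return gbest, f(gbest)

best_x, best_y = pso(net)
print(best_x, best_y)
```

Because only forward evaluations of the network are needed, the same loop works for any model whose internals are opaque.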


2020 ◽  
Vol 174 ◽  
pp. 03023
Author(s):  
Yelena Vasileva ◽  
Aleksandr Nevedrov ◽  
Sergey Subbotin

Process performance indicators of coking plants are based on data on the yield of coking by-products and their quality; therefore, much attention is paid to their analysis. Given the complexity and insufficient knowledge of the relationships between these parameters, mathematical modeling of this dependence using neural networks is of great interest. Based on a mathematical analysis of experimental data on the quality indicators of coal, coal concentrates and the by-product yield, neural network mathematical models have been developed to forecast the parameters under study. The neural network is based on Ward's network. Based on the results of the research, the application "Intelligent Information System for Forecasting By-product Yield", which implements the neural networks, was created [1]. The relative forecasting error for the parameter "coke" is 0.64±0.23%, for "coal tar" 19.53±5.25%, for "crude benzene" 10.02±2.83%, and for "coke gas" 5.11±1.34%. A comparative analysis is carried out between the data obtained using the developed method, the simulation results of existing methods, and the production values of by-product yields.


2017 ◽  
Vol 26 (4) ◽  
pp. 625-639 ◽  
Author(s):  
Gang Wang

Abstract Currently, most artificial neural networks (ANNs), such as the back-propagation neural network, represent relations in the manner of functional approximation. This kind of ANN is good at representing the numeric relations or ratios between things. However, for representing logical relations, these ANNs are at a disadvantage because their representation takes the form of ratios. Therefore, to represent logical relations directly, we propose a novel ANN model called the probabilistic logical dynamical neural network (PLDNN). Inhibitory links are introduced to connect exciting links rather than neurons, so as to conditionally inhibit the connected exciting links and thereby represent logical relations correctly. Probabilities are assigned to the weights of links to indicate the degree of belief in logical relations under uncertain situations. Moreover, the network structure of a PLDNN is less limited in topology than that of traditional ANNs, and it is built dynamically, entirely according to the data, to make it adaptive. The PLDNN uses both the weights of links and the interconnection structure to memorize more information. The model could be applied to represent logical relations as a complement to numeric ANNs.


2019 ◽  
Vol 13 (2) ◽  
pp. 228
Author(s):  
Abdel Latif Abu Dalhoum ◽  
Mohammed Al-Rawi

Equivalence of computational systems can assist in obtaining abstract systems, and thus enable a better understanding of issues related to their design and performance. For more than four decades, artificial neural networks have been used in many scientific applications to solve classification problems as well as other problems. Since their introduction, the multilayer feedforward neural network, referred to as the Ordinary Neural Network (ONN), which contains only summation-activation (Sigma) neurons, and the multilayer feedforward High-order Neural Network (HONN), which contains both Sigma neurons and product-activation (Pi) neurons, have been treated in the literature as different entities. In this work, we studied whether HONNs are mathematically equivalent to ONNs. We have proved that every HONN can be converted to some equivalent ONN. In most cases, one just needs to modify the neuronal transfer function of the Pi neuron to convert it to a Sigma neuron. The theorems we have derived clearly show that the original HONN and its corresponding equivalent ONN give exactly the same output, which means they can both be used to perform exactly the same functionality. We also derived equivalence theorems for several other non-standard neural networks, for example, recurrent HONNs and HONNs with translated multiplicative neurons. This work rejects the hypothesis that HONNs and ONNs are different entities, a conclusion that might initiate a new research frontier in artificial neural network research.
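The core conversion trick is easy to illustrate for the simplest case: when the weighted inputs are positive, a Pi neuron's product can be reproduced by a Sigma neuron that sums log-transformed inputs and uses exp as its transfer function. A toy sketch of that case only (the paper's full construction also covers signs and other configurations):

```python
import math

def pi_neuron(xs, ws):
    """Product (Pi) neuron: multiplies its weighted inputs."""
    out = 1.0
    for x, w in zip(xs, ws):
        out *= w * x
    return out

def sigma_equivalent(xs, ws):
    """Same mapping via a summation (Sigma) neuron: log-transform the
    weighted inputs, sum them, and apply exp as the transfer function.
    Restricted here to positive weighted inputs."""
    return math.exp(sum(math.log(w * x) for x, w in zip(xs, ws)))

xs, ws = [2.0, 3.0, 0.5], [1.0, 0.5, 4.0]
print(pi_neuron(xs, ws), sigma_equivalent(xs, ws))
```

Both functions realize the identical input-output mapping, which is the sense in which a HONN unit can be absorbed into an ONN by changing the transfer function.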


Author(s):  
Houcheng Tang ◽  
Leila Notash

Abstract In this paper, a neural network based transfer learning approach to the inverse displacement analysis of robot manipulators is studied. Neural networks with different structures are trained using data from different configurations of a manipulator. Transfer learning is then conducted between manipulators with different geometric layouts, with training performed both on neural networks with pretrained initial parameters and on neural networks with random initialization. To investigate the rate of convergence of the data fitting comprehensively, different performance targets are defined, and the training epochs and performance measures are compared. We show that, depending on the structure of the neural network, the proposed transfer learning can accelerate the training process and achieve higher accuracy. For different datasets, the transfer learning approach improves performance to different degrees.
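The pretrained-versus-random-initialization comparison can be sketched with a deliberately simplified stand-in: a linear model trained by gradient descent takes the place of the neural network, and the two "manipulator layouts" are hypothetical datasets whose true mappings differ slightly. Everything here (data, weights, thresholds) is illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def train(X, y, w0, lr=0.1, target=1e-3, max_epochs=10000):
    """Gradient descent on mean squared error; returns the weights and
    the number of epochs needed to reach the performance target."""
    w = w0.astype(float).copy()
    for epoch in range(max_epochs):
        err = X @ w - y
        if np.mean(err ** 2) < target:
            return w, epoch
        w -= lr * (2.0 / len(y)) * (X.T @ err)
    return w, max_epochs

# "Source manipulator": data generated from one (hypothetical) layout.
X = rng.normal(size=(100, 3))
w_src_true = np.array([1.0, -2.0, 0.5])
w_pretrained, _ = train(X, X @ w_src_true, np.zeros(3))

# "Target manipulator": a slightly different layout.
w_tgt_true = w_src_true + 0.1
_, epochs_transfer = train(X, X @ w_tgt_true, w_pretrained)  # pretrained init
_, epochs_scratch = train(X, X @ w_tgt_true, np.zeros(3))    # cold start
print(epochs_transfer, epochs_scratch)
```

Because the pretrained weights already sit near the target mapping, the transfer run reaches the performance target in fewer epochs, which is the effect the paper measures for its neural networks.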


2020 ◽  
Vol 10 (9) ◽  
pp. 3119 ◽  
Author(s):  
Dariusz Jamróz ◽  
Tomasz Niedoba ◽  
Paulina Pięta ◽  
Agnieszka Surowiak

The paper presents a way of combining neural networks with evolutionary algorithms in order to find optimal parameters of the copper flotation enrichment process. A neural network was used to build a model describing the flotation process, trained on samples from previous empirical measurements of the actual process. The model created in this way made it possible to find optimal parameters not only within the measured space, but also beyond it. Evolutionary algorithms were then used to find optimal flotation parameters, with the trained neural network serving to evaluate the fitness criterion in the evolutionary algorithm.
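The surrogate-model pattern the paper uses can be sketched as follows: an evolutionary loop proposes parameter vectors, and a trained model scores them. Here a simple quadratic function is a hypothetical stand-in for the trained flotation network, and the evolutionary loop is a generic mutate-and-select scheme rather than the paper's specific algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate(params):
    """Stand-in for the trained flotation model: maps process parameters
    to a predicted enrichment score (a trained network would go here)."""
    target = np.array([0.3, 0.7])  # hypothetical optimum
    return -np.sum((params - target) ** 2)

def evolve(fitness, dim=2, pop_size=30, gens=60, sigma=0.1):
    """(mu + lambda)-style evolution: mutate each individual, score
    parents and children with the surrogate, keep the best half."""
    pop = rng.uniform(0, 1, (pop_size, dim))
    for _ in range(gens):
        children = np.clip(pop + rng.normal(0, sigma, pop.shape), 0, 1)
        both = np.vstack([pop, children])
        scores = np.array([fitness(p) for p in both])
        pop = both[np.argsort(scores)[-pop_size:]]
    best = pop[-1]
    return best, fitness(best)

best_params, best_fit = evolve(surrogate)
print(best_params, best_fit)
```

Since each fitness evaluation is just a model forward pass, the search is cheap compared with running the physical flotation process, which is the point of the combination.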


Author(s):  
Sarat Chandra Nayak ◽  
Bijan Bihari Misra ◽  
Himansu Sekhar Behera

Financial time series forecasting is regarded as a challenging task because successful prediction can yield significant profit, and hence requires an efficient prediction system. Conventional ANN-based models are not competent systems. Higher-order neural networks have several advantages over traditional neural networks, such as stronger approximation ability, higher fault tolerance and faster convergence. With the aim of achieving improved forecasting accuracy, this article develops and evaluates the performance of an adaptive single-layer second-order neural network with GA-based training (ASONN-GA). The global search ability of the GA is combined with the better generalization ability of a second-order neural network, and the model is found quite capable of handling the uncertainties and nonlinearities associated with financial time series. The model takes minimal input data and reuses the partially optimized weight set from previous training, hence a significant reduction in training time. The efficiency of the model has been evaluated by forecasting one-step-ahead closing prices and exchange rates of five real stock markets, and it is revealed that the ASONN-GA model achieves better forecasting accuracy than other state-of-the-art models.
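A second-order neuron augments the usual weighted sum with pairwise product terms, which is the source of the stronger approximation ability mentioned above. A minimal generic sketch of such a unit (not the ASONN-GA architecture itself):

```python
import numpy as np

def second_order_features(x):
    """Augment the inputs with all pairwise products x_i * x_j (i <= j),
    the extra terms a second-order neuron weights alongside each x_i."""
    x = np.asarray(x)
    pairs = [x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))]
    return np.concatenate([x, pairs])

def second_order_neuron(x, w, b=0.0):
    """Single second-order unit: a weighted sum over first- and
    second-order terms, passed through a sigmoid transfer function."""
    z = np.dot(w, second_order_features(x)) + b
    return 1.0 / (1.0 + np.exp(-z))

x = [1.0, 2.0]
print(second_order_features(x))  # [1. 2. 1. 2. 4.]
```

In a GA-trained model like the one described, the weight vector `w` (and bias) would be the chromosome the genetic algorithm searches over.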


Author(s):  
Rahul Kala ◽  
Anupam Shukla ◽  
Ritu Tiwari

The complexity of problems has led to a shift toward the use of modular neural networks in place of traditional neural networks. The number of inputs to a neural network must be kept within manageable limits to escape the curse of dimensionality. Attribute division is a novel concept to reduce the problem dimensionality without losing information. In this paper, the authors use Genetic Algorithms to determine the optimal distribution of the parameters to the various modules of the modular neural network. The attribute set is divided among the modules, each of which computes its output using its own list of attributes. The individual results are then integrated by an integrator. This framework is used for the diagnosis of breast cancer. Experimental results show that the optimal distribution strategy outperforms well-known methods for the diagnosis of the disease.
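The attribute-division-plus-integrator pipeline can be sketched in a few lines. The division, module weights, and integrator (a simple average) below are hypothetical illustrations; in the paper the division is found by a Genetic Algorithm and each module is a trained neural network:

```python
import numpy as np

def module_predict(x_subset, w):
    """One module: here just a logistic unit over its assigned attributes
    (a full neural network module would go in its place)."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x_subset)))

def modular_predict(x, division, weights):
    """Route each attribute subset to its module, then integrate the
    module outputs by averaging."""
    outputs = [module_predict(x[idx], w) for idx, w in zip(division, weights)]
    return np.mean(outputs)

x = np.array([0.2, -1.0, 0.5, 2.0])
division = [np.array([0, 2]), np.array([1, 3])]   # hypothetical GA split
weights = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]
score = modular_predict(x, division, weights)
print(score)
```

Each module sees only its slice of the attributes, so no single network faces the full input dimensionality; the GA's job is to choose the `division` that makes the integrated output most accurate.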


Author(s):  
Hao-Yun Chen

Traditionally, software programmers write a series of hard-coded rules to instruct a machine, step by step. However, with the ubiquity of neural networks, instead of giving specific instructions, programmers can write a skeleton of code to build a neural network structure, and then feed the machine with data sets, in order to have the machine write code by itself. Software containing the code written in this manner changes and evolves over time as new data sets are input and processed. This characteristic distinguishes it markedly from traditional software, and is partly the reason why it is referred to as ‘software 2.0’. Yet the vagueness of the scope of such software might make it ineligible for protection by copyright law. To properly understand and address this issue, this chapter will first review the current scope of computer program protection under copyright laws, and point out the potential inherent issues arising from the application of copyright law to software 2.0. After identifying related copyright law issues, this chapter will then examine the possible justification for protecting computer programs in the context of software 2.0, aiming to explore whether new exclusivity should be granted or not under copyright law, and if not, what alternatives are available to provide protection for the investment in the creation and maintenance of software 2.0.

