On Generating Optimal Signal Probabilities for Random Tests: A Genetic Approach

VLSI Design ◽  
1996 ◽  
Vol 4 (3) ◽  
pp. 207-215 ◽  
Author(s):  
M. Srinivas ◽  
L. M. Patnaik

Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overhead of the GA in computing the input distributions is larger. To account for the relatively quick convergence of the gradient descent methods, we analyze the landscape of the COP-based cost function. We prove that the cost function is unimodal in the search space. This feature makes the cost function more amenable to optimization by gradient-descent techniques than by random search methods such as Genetic Algorithms.
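The COP testability measure mentioned above estimates the probability that each circuit line carries a logic 1 by propagating the input signal probabilities gate by gate under an independence assumption. A minimal sketch on a hypothetical three-gate circuit (the circuit and probabilities are illustrative, not from the paper):

```python
# COP-style signal probabilities for a toy circuit:
#   c = AND(a, b);  d = NOT(a);  e = OR(c, d)
def cop(p_a, p_b):
    p_c = p_a * p_b                           # AND: 1 iff both inputs are 1
    p_d = 1.0 - p_a                           # NOT: inverts the probability
    p_e = 1.0 - (1.0 - p_c) * (1.0 - p_d)     # OR: 1 unless both inputs are 0
    return p_e
```

With uniform inputs, cop(0.5, 0.5) gives 0.625. Note that COP treats the two inputs of the OR gate as independent even though both depend on a; such reconvergent fanout is exactly where COP's estimates become approximate.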

Author(s):  
Ashraf O. Nassef ◽  
Hesham A. Hegazi ◽  
Sayed M. Metwalli

Abstract The hybridization of different optimization methods has been used to find the optimum solution of design problems. While random search techniques, such as genetic algorithms and simulated annealing, have a high probability of achieving global optimality, they usually arrive at a near-optimal solution due to their random nature. On the other hand, direct search methods are efficient optimization techniques but can get stuck in local minima if the objective function is multi-modal. This paper presents the optimization of a C-frame cross-section using a hybrid optimization algorithm. Real-coded genetic algorithms are used as the random search method, while Nelder-Mead is used as the direct search method, with the result of the genetic algorithm search used as the starting point of the direct search. Traditionally, the cross-section of a C-frame belonged to a set of primitive shapes, which included I, T, trapezoidal, circular and rectangular sections. Here, the cross-sectional shape is represented by non-uniform rational B-splines (NURBS) in order to give it shape flexibility. The results showed that the use of Nelder-Mead with real-coded genetic algorithms significantly improves the optimum shape of a solid C-frame cross-section subjected to combined tension and bending stresses. The hybrid optimization method could be extended to more complex shape optimization problems.
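The hybrid scheme above, using the genetic algorithm's best individual to seed a direct search, can be sketched as follows. This is a minimal illustration, not the authors' implementation: a tiny real-coded GA, a compass (pattern) search standing in for Nelder-Mead, and the multimodal Rastrigin function standing in for the C-frame objective:

```python
import numpy as np

def rastrigin(x):
    x = np.asarray(x, float)
    return 10.0 * x.size + np.sum(x * x - 10.0 * np.cos(2 * np.pi * x))

def real_coded_ga(f, dim=2, pop=40, gens=60, lo=-5.12, hi=5.12, seed=0):
    """Tiny real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, (pop, dim))
    best_x, best_f = P[0].copy(), f(P[0])
    for _ in range(gens):
        fit = np.array([f(ind) for ind in P])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best_f, best_x = fit[i], P[i].copy()
        a, b = rng.integers(0, pop, pop), rng.integers(0, pop, pop)
        parents = np.where((fit[a] < fit[b])[:, None], P[a], P[b])  # tournaments
        alpha = rng.uniform(0, 1, (pop, dim))
        P = alpha * parents + (1 - alpha) * parents[::-1]           # blend crossover
        P += rng.normal(0, 0.1, (pop, dim))                         # Gaussian mutation
        P = np.clip(P, lo, hi)
    return best_x, best_f

def pattern_search(f, x0, step=0.5, tol=1e-6):
    """Compass/pattern search: a simple direct-search stand-in for Nelder-Mead."""
    x = np.asarray(x0, float)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(x.size):
            for d in (step, -step):
                y = x.copy()
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                 # shrink the stencil when stuck
    return x, fx

ga_x, ga_f = real_coded_ga(rastrigin)            # global, stochastic stage
opt_x, opt_f = pattern_search(rastrigin, ga_x)   # local, direct-search polish
```

The direct search can only improve on the GA's best point, which is the essence of the hybrid: the stochastic stage locates a promising basin, the deterministic stage descends to its bottom.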


2020 ◽  
Vol 30 (6) ◽  
pp. 1645-1663
Author(s):  
Ömer Deniz Akyildiz ◽  
Dan Crisan ◽  
Joaquín Míguez

Abstract We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function that consists of the sum of many individual components. The proposed scheme is a stochastic zeroth-order optimization algorithm which demands only the capability to evaluate small subsets of components of the cost function. It can be depicted as a bank of samplers that generate particle approximations of several sequences of probability measures. These measures are constructed in such a way that they have associated probability density functions whose global maxima coincide with the global minima of the original cost function. The algorithm selects the best performing sampler and uses it to approximate a global minimum of the cost function. We prove analytically that the resulting estimator converges to a global minimum of the cost function almost surely and provide explicit convergence rates in terms of the number of generated Monte Carlo samples and the dimension of the search space. We show, by way of numerical examples, that the algorithm can tackle cost functions with multiple minima or with broad “flat” regions which are hard to minimize using gradient-based techniques.
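The core idea above can be sketched in miniature: transform the cost into an unnormalised density exp(-cost), run a bank of samplers that weight particles by minibatch cost, resample and jitter, and select the best-performing sampler. Everything below is illustrative (the quadratic sum-of-components cost, all parameters, and the fact that the demo scores candidates with the full cost, which the paper's scheme avoids):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(3.0, 1.0, size=100)     # data defining the component costs

def full_cost(x):
    # C(x) = sum of many individual components (x - a_i)^2, minimised at mean(a)
    return float(np.sum((x - a) ** 2))

def smc_sampler(seed, n_part=200, iters=30, batch=10):
    r = np.random.default_rng(seed)
    x = r.uniform(-10.0, 10.0, n_part)                 # particle cloud
    best_x, best_c = x[0], full_cost(x[0])
    for t in range(iters):
        idx = r.choice(a.size, batch, replace=False)   # small subset of components
        c = np.sum((x[:, None] - a[idx][None, :]) ** 2, axis=1)
        w = np.exp(-(c - c.min()) / batch)             # density ~ exp(-cost)
        w /= w.sum()
        x = r.choice(x, n_part, p=w)                   # resample by weight
        x = x + r.normal(0.0, 1.0 / (t + 1.0), n_part) # shrinking jitter
        cf = np.array([full_cost(xi) for xi in x])     # demo-only full scoring
        i = int(np.argmin(cf))
        if cf[i] < best_c:
            best_c, best_x = cf[i], x[i]
    return best_x, best_c

bank = [smc_sampler(s) for s in range(4)]          # bank of samplers (parallelisable)
best_x, best_c = min(bank, key=lambda r: r[1])     # select the best performer
```

Because only a random subset of components is evaluated per iteration, each weighting step is cheap; the resample-and-jitter loop concentrates the particle cloud around the global minimiser even though no gradients are ever computed.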


This paper presents a review of various strategies for improving the reliability of the deregulated system, for instance, Genetic Algorithms (GA), Tabu Search (TS), heuristic calculations and system-based techniques. These methodologies were developed to improve reliability through either software or hardware exclusively. Besides, the cost components associated with capacity use and reliability benefit charges are determined, and various optimization techniques for achieving this objective are reviewed.


2018 ◽  
Vol 11 (12) ◽  
pp. 4739-4754 ◽  
Author(s):  
Vladislav Bastrikov ◽  
Natasha MacBean ◽  
Cédric Bacour ◽  
Diego Santaren ◽  
Sylvain Kuppel ◽  
...  

Abstract. Land surface models (LSMs), which form the land component of earth system models, rely on numerous processes for describing carbon, water and energy budgets, often associated with highly uncertain parameters. Data assimilation (DA) is a useful approach for optimising the most critical parameters in order to improve model accuracy and refine future climate predictions. In this study, we compare two different DA methods for optimising the parameters of seven plant functional types (PFTs) of the ORCHIDEE LSM using daily averaged eddy-covariance observations of net ecosystem exchange and latent heat flux at 78 sites across the globe. We perform a technical investigation of two classes of minimisation methods – local gradient-based (the L-BFGS-B algorithm, limited memory Broyden–Fletcher–Goldfarb–Shanno algorithm with bound constraints) and global random search (the genetic algorithm) – by evaluating their relative performance in terms of the model–data fit and the difference in retrieved parameter values. We examine the performance of each method for two cases: when optimising parameters at each site independently (“single-site” approach) and when simultaneously optimising the model at all sites for a given PFT using a common set of parameters (“multi-site” approach). We find that for the single-site case the random search algorithm results in lower values of the cost function (i.e. lower model–data root mean square differences) than the gradient-based method; the difference between the two methods is smaller for the multi-site optimisation due to a smoothing of the cost function shape with a greater number of observations. The spread of the cost function, when performing the same tests with 16 random first-guess parameters, is much larger with the gradient-based method, due to the higher likelihood of being trapped in local minima.
When using pseudo-observation tests, the genetic algorithm results in a closer approximation of the true posterior parameter values than the L-BFGS-B algorithm. We demonstrate the advantages and challenges of different DA techniques and provide some advice on using them for LSM parameter optimisation.
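The sensitivity to first-guess parameters described above can be illustrated with a toy multi-start experiment (plain gradient descent stands in for L-BFGS-B, and the one-dimensional multimodal cost is purely illustrative):

```python
import numpy as np

def cost(x):
    return np.sin(3 * x) + 0.1 * x * x       # multimodal toy "cost function"

def dcost(x):
    return 3 * np.cos(3 * x) + 0.2 * x       # its derivative

def descend(x, lr=0.01, steps=2000):
    """Local gradient descent: converges to whichever minimum owns the basin of x."""
    for _ in range(steps):
        x -= lr * dcost(x)
    return x

starts = [-3.0, -1.0, 0.0, 1.0, 3.0]         # several first guesses
finals = [cost(descend(x0)) for x0 in starts]
```

Each start ends in its own basin's minimum, so the final cost values are spread across the local minima; a global random search run from the same starts would not show this spread.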


Author(s):  
TAO WANG ◽  
XIAOLIANG XING ◽  
XINHUA ZHUANG

In this paper, we describe an optimal learning algorithm for designing one-layer neural networks by means of global minimization. Taking the properties of a well-defined neural network into account, we derive a cost function to measure the goodness of the network quantitatively. The connection weights are determined by the gradient descent rule to minimize the cost function. The optimal learning algorithm is formed as either an unconstrained or a constrained minimization problem. It ensures the realization of each desired associative mapping with the best noise reduction ability in the sense of optimization. We also investigate the storage capacity of the neural network, the degree of noise reduction for a desired associative mapping, and the convergence of the learning algorithm in an analytic way. Finally, a large number of computer experimental results are presented.
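A gradient descent rule minimising a quadratic cost over one-layer connection weights can be sketched as follows (the network size, data, learning rate, and quadratic cost are illustrative, not the paper's derived cost function):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                  # input patterns
W_true = rng.normal(size=(8, 4))              # hypothetical target weights
T = X @ W_true                                # realizable desired mappings

def cost(W):
    # quadratic measure of the "goodness" of the network mapping
    return 0.5 * np.sum((X @ W - T) ** 2) / X.shape[0]

W = np.zeros((8, 4))                          # connection weights
lr = 0.05
history = [cost(W)]
for _ in range(500):
    grad = X.T @ (X @ W - T) / X.shape[0]     # exact gradient of cost(W)
    W -= lr * grad                            # gradient descent rule
    history.append(cost(W))
```

Because the cost is quadratic (and hence convex) in W, gradient descent with a small enough step decreases it monotonically and converges to the global minimum; this is the benign case that makes one-layer learning analytically tractable.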


2019 ◽  
Vol 24 (2) ◽  
pp. 17-21
Author(s):  
Arjun Singh Saud ◽  
Subarna Shakya

The stock price is the price of purchasing a security on a stock exchange. Stock price prediction has been the aim of investors since the beginning of the stock market; it is the act of forecasting the future price of a company's stock. Nowadays, deep learning techniques are widely used for identifying stock trends from large amounts of past data. This research experiments with two big and robust commercial banks listed on the Nepal Stock Exchange (NEPSE) and compares the stock price prediction performance of a GRU with three widely used gradient descent optimization techniques: Momentum, RMSProp, and Adam. The present study finds that the GRU with Adam is the most accurate and consistent approach for predicting stock prices.
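The three optimizer update rules compared above are standard; they can be sketched side by side on a toy one-dimensional quadratic (hyperparameters and the toy objective are illustrative, whereas the paper applies the optimizers inside a GRU):

```python
import numpy as np

def grad(x, target=2.0):
    return 2.0 * (x - target)                 # gradient of (x - target)^2

def momentum(x=5.0, lr=0.01, mu=0.9, steps=3000):
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)             # velocity accumulates past gradients
        x += v
    return x

def rmsprop(x=5.0, lr=0.01, rho=0.9, eps=1e-8, steps=3000):
    s = 0.0
    for _ in range(steps):
        g = grad(x)
        s = rho * s + (1 - rho) * g * g       # running mean of squared gradients
        x -= lr * g / (np.sqrt(s) + eps)      # per-parameter adaptive step
    return x

def adam(x=5.0, lr=0.01, b1=0.9, b2=0.999, eps=1e-8, steps=3000):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g             # first-moment estimate
        v = b2 * v + (1 - b2) * g * g         # second-moment estimate
        x -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return x
```

Adam combines Momentum's gradient averaging with RMSProp's per-parameter step scaling, plus bias correction for the early iterations, which is one reason it tends to be the most consistent of the three.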


2012 ◽  
Vol 9 (6) ◽  
pp. 3593-3642
Author(s):  
H. Sumata ◽  
F. Kauker ◽  
R. Gerdes ◽  
C. Köberle ◽  
M. Karcher

Abstract. Two types of optimization methods were applied to a parameter optimization problem in a coupled ocean–sea ice model, and the applicability and efficiency of the respective methods were examined. One is a finite difference method based on a traditional gradient descent approach, while the other adopts genetic algorithms as an example of stochastic approaches. Several series of parameter optimization experiments were performed by minimizing a cost function composed of model–data misfit of ice concentration, ice drift velocity and ice thickness. The finite difference method fails to estimate optimal parameters due to the ill-shaped nature of the cost function, whereas the genetic algorithms can effectively estimate near-optimal parameters with a practical number of iterations. The results of the study indicate that a sophisticated stochastic approach is of practical use for the parameter optimization of a coupled ocean–sea ice model.
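The finite difference method above estimates the gradient of the cost function from perturbed model runs, since the model itself provides no analytic derivatives. A minimal sketch (central differences; the three-parameter quadratic misfit stands in for the ocean–sea ice cost function):

```python
import numpy as np

target = np.array([0.9, 0.1, 0.5])          # hypothetical "true" parameters

def cost(p):
    return np.sum((p - target) ** 2)        # toy model-data misfit

def fd_gradient(f, theta, h=1e-5):
    """Central finite-difference estimate: two extra cost evaluations per parameter."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (f(theta + e) - f(theta - e)) / (2.0 * h)
    return g

theta = np.array([0.2, 0.8, 0.3])           # first guess
for _ in range(200):
    theta = theta - 0.1 * fd_gradient(cost, theta)   # gradient descent step
```

On a smooth, well-shaped cost this works cleanly; on an ill-shaped (noisy or nearly flat) cost, as in the sea ice case, the differenced gradient becomes unreliable, which is precisely why the stochastic approach wins there.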


Author(s):  
Luciano T. Vieira ◽  
Beatriz de S. L. P. de Lima ◽  
Alexandre G. Evsukoff ◽  
Breno P. Jacob

The purpose of this work is to describe the application of Genetic Algorithms in the search for the best configuration of catenary riser systems in deep waters. Particularly, an optimization methodology based on genetic algorithms is implemented in a computer program in order to seek an optimum geometric configuration for a steel catenary riser in a lazy-wave configuration. This problem is characterized by a very large space of possible solutions; the use of traditional methods is an exhaustive task, since there is a large number of variables and parameters that define this type of system. Genetic algorithms are more robust than the more commonly used optimization techniques. They use random choice as a tool to guide a search toward regions of the search space with likely improvements. Some differences, such as the coding of the parameter set, the search from a population of points, the use of objective functions and randomized operators, are factors that contribute to the robustness of a genetic algorithm and result in advantages over traditional techniques. The implemented methodology takes as its baseline one or more criteria established by the experience of the offshore engineer. The implementation of an intelligent methodology oriented specifically to the optimization and synthesis of riser configurations will not only facilitate the work of manipulating a huge mass of data, but also assure the best alternative among all the possible ones, searching a much larger space of possible solutions than classical methods.


Author(s):  
Carlo L. Bottasso ◽  
Alessandro Croce ◽  
Stefano Sartirana ◽  
Boris I. Prilutsky

We propose a computational procedure for inferring the cost functions that, according to the Principle of Optimality, underlie experimentally observed motor strategies. This work tries to overcome the need to hypothesize the cost functions, extracting this information, which is not directly observable, from experimental data. Optimality criteria of observed motor tasks are here indirectly derived using: a) a mathematical model of the bio-system; and b) a parametric mathematical model of the possible cost functions, i.e. a search space constructed in such a way as to presumably contain the unknown function that was used by the bio-system in the given motor task of interest. The cost function that best matches the experimental data is identified within the search space by solving a nested optimization problem. This problem can be recast as a non-linear programming problem and therefore solved using standard techniques. The proposed methodology is tested on representative examples.
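The nested structure, an outer search over cost-function parameters wrapped around an inner solve for optimal behaviour, can be sketched on a scalar toy task. Everything here is illustrative: the inner optimum has a closed form and the outer search is a grid, whereas a real application would run a numerical optimal-control solve in the inner loop and a non-linear programming solver in the outer one:

```python
import numpy as np

a, b = 0.0, 10.0
w_true = 0.7
# "observed" behaviour: the minimiser of w*(x-a)^2 + (1-w)*(x-b)^2
x_obs = w_true * a + (1 - w_true) * b          # = 3.0

def inner_optimum(w):
    # inner problem: behaviour predicted by the candidate cost function
    # (closed form here; a real task would solve it numerically)
    return w * a + (1 - w) * b

def outer_mismatch(w):
    # outer problem: how far predicted behaviour is from the observation
    return (inner_optimum(w) - x_obs) ** 2

ws = np.linspace(0.0, 1.0, 1001)               # search space of cost weights
w_hat = ws[int(np.argmin([outer_mismatch(w) for w in ws]))]
```

The recovered weight w_hat matches w_true, i.e. the unobservable cost function has been identified from the observed behaviour alone.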


2020 ◽  
Vol 10 (3) ◽  
pp. 1073 ◽  
Author(s):  
Dokkyun Yi ◽  
Jaehyun Ahn ◽  
Sangmin Ji

A machine is taught by finding the minimum value of the cost function which is induced by learning data. Unfortunately, as the amount of learning increases, the non-linear activation functions in the artificial neural network (ANN), the complexity of the artificial intelligence structures, and the cost function's non-convex complexity all increase. We know that a non-convex function has local minima, and that the first derivative of the cost function is zero at a local minimum. Therefore, methods based on gradient descent optimization do not undergo further change when they fall into a local minimum, because they are based on the first derivative of the cost function. This paper introduces a novel optimization method to make machine learning more efficient. In other words, we construct an effective optimization method for non-convex cost functions. The proposed method solves the problem of falling into a local minimum by adding the cost function to the parameter update rule of the ADAM method. We prove the convergence of the sequences generated by the proposed method and demonstrate its superiority by numerical comparison with gradient descent methods (GD, ADAM, and AdaMax).
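The trapping behaviour described above, where a first-derivative-based update stalls wherever the gradient vanishes, can be demonstrated with plain gradient descent on a toy non-convex function (purely illustrative; the paper's ADAM modification itself is not reproduced here):

```python
def f(x):
    return (x * x - 1) ** 2 + 0.3 * x      # two minima; the global one is near x = -1.04

def df(x):
    return 4 * x * (x * x - 1) + 0.3       # first derivative

x = 1.2                                    # start in the shallow (local) basin
for _ in range(2000):
    x -= 0.01 * df(x)                      # plain gradient descent update
```

The iterate settles at the local minimum near x = 0.96 (where df is zero) and never reaches the lower minimum near x = -1.04: once the first derivative vanishes, a purely gradient-based rule makes no further progress, which is the failure mode the paper's cost-aware update targets.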

