Minimization of the Total Traveling Distance and Maximum Distance by Using a Transformed-Based Encoding EDA to Solve the Multiple Traveling Salesmen Problem

2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
S. H. Chen

Estimation of distribution algorithms (EDAs) have been used to solve numerous hard problems. However, their application to in-group optimization problems has not been discussed extensively in the literature. A well-known in-group optimization problem is the multiple traveling salesmen problem (mTSP), which involves simultaneous assignment and sequencing procedures and appears in different forms. This paper presents a new algorithm, named EDAMLA, which is based on a self-guided genetic algorithm with a minimum loading assignment (MLA) rule. This strategy uses a transformed-based encoding approach instead of direct encoding, so the solution space of the proposed method is only n!. We compare the proposed algorithm against the optimal direct encoding technique and the two-part encoding genetic algorithm, whose solution space is n!·C(n−1, m−1), in experiments on 34 TSP instances drawn from the TSPLIB. The scale of the experiments exceeded that of prior studies. The results show that the proposed algorithm was superior to the two-part encoding genetic algorithm in terms of minimizing the total traveling distance. Notably, the proposed algorithm did not incur a longer traveling distance when the number of salesmen was increased from 3 to 10. The results suggest that EDA researchers should employ the MLA rule instead of direct encoding in their proposed algorithms.
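The minimum loading assignment idea described in the abstract can be sketched as follows; this is an assumed reading for illustration, not the paper's implementation. A single permutation of cities (hence the n! solution space) is decoded into m routes by always giving the next city to the salesman whose route currently has the minimum accumulated distance. The function names and the toy Euclidean instance are assumptions.

```python
import math

def dist(a, b):
    # Euclidean distance between two city coordinates
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mla_decode(perm, cities, m, depot):
    # Decode one permutation of city indices into m routes:
    # each city goes to the salesman whose route currently has
    # the minimum accumulated distance (the "loading").
    routes = [[depot] for _ in range(m)]
    loads = [0.0] * m
    for c in perm:
        k = min(range(m), key=lambda i: loads[i])
        loads[k] += dist(cities[routes[k][-1]], cities[c])
        routes[k].append(c)
    # close every tour back at the depot
    total = sum(loads[i] + dist(cities[routes[i][-1]], cities[depot])
                for i in range(m))
    return routes, total
```

Because the EDA then only searches over permutations, the assignment step never enlarges the search space beyond n!, in contrast with the two-part encoding's n!·C(n−1, m−1).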

2005 ◽  
Vol 13 (1) ◽  
pp. 125-143 ◽  
Author(s):  
Yong Gao ◽  
Joseph Culberson

In this paper, we investigate the space complexity of estimation of distribution algorithms (EDAs), a class of sampling-based variants of the genetic algorithm. By analyzing the nature of EDAs, we identify criteria that characterize the space complexity of two typical implementation schemes of EDAs: the factorized distribution algorithm and Bayesian network-based algorithms. Using random additive functions as the prototype, we prove that the space complexity of both schemes is exponential in the problem size even if the optimization problem has a very sparse interaction structure.
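The storage criterion at issue can be made concrete with a small sketch (an illustration, not the paper's proof): a factorized distribution stores one probability table per clique of the interaction graph, so its memory cost is driven by the largest clique the factorization is forced to use, which can grow with problem size even when each subfunction touches only a few variables.

```python
def factorization_table_cells(cliques, arity=2):
    # Total number of probability-table cells a factorized
    # distribution must store: one table per clique, each with
    # arity ** |clique| entries (binary variables by default).
    return sum(arity ** len(c) for c in cliques)
```

Two overlapping pairwise cliques cost only 8 cells, but if the factorization is forced to merge 20 variables into one clique, that single table already needs 2^20 cells.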


2013 ◽  
Vol 373-375 ◽  
pp. 1089-1092
Author(s):  
Fa Hong Yu ◽  
Wei Zhi Liao ◽  
Mei Jia Chen

Estimation of distribution algorithms (EDAs) are a class of methods for solving NP-hard problems. However, for some problems it is hard for them to find the global optimum quickly, especially for the traveling salesman problem (TSP), a classical NP-hard combinatorial optimization problem. To solve the TSP effectively, a novel estimation of distribution algorithm (NEDA) is proposed, which resolves the conflict between population diversity and algorithm convergence. The experimental results show that NEDA performs effectively.
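The abstract does not specify NEDA's probabilistic model; as a hedged illustration of how an EDA can be applied to the TSP at all, here is a minimal edge-histogram-style sketch: count edges in the population, then sample new tours city by city in proportion to those counts. All names and the add-one smoothing constant are assumptions; the smoothing keeps some diversity while the counts concentrate search on frequent edges, the tension the paper addresses.

```python
import random

def edge_counts(pop, n):
    # Frequency of directed edges a->b across a population of
    # tours, with add-one smoothing so no edge has zero mass.
    counts = [[1.0] * n for _ in range(n)]
    for tour in pop:
        for a, b in zip(tour, tour[1:] + tour[:1]):
            counts[a][b] += 1
    return counts

def sample_tour(counts, n, rng):
    # Build a new tour city by city, choosing the next city
    # with probability proportional to its edge count.
    cur = rng.randrange(n)
    tour, unvisited = [cur], set(range(n)) - {cur}
    while unvisited:
        cand = list(unvisited)
        weights = [counts[cur][c] for c in cand]
        cur = rng.choices(cand, weights=weights)[0]
        tour.append(cur)
        unvisited.remove(cur)
    return tour
```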


2013 ◽  
Vol 21 (3) ◽  
pp. 471-495 ◽  
Author(s):  
Carlos Echegoyen ◽  
Alexander Mendiburu ◽  
Roberto Santana ◽  
Jose A. Lozano

Understanding the relationship between a search algorithm and the space of problems is a fundamental issue in the optimization field. In this paper, we lay the foundations for elaborating taxonomies of problems under estimation of distribution algorithms (EDAs). By using an infinite population model and assuming that the selection operator is based on the rank of the solutions, we group optimization problems according to the behavior of the EDA. Through the definition of an equivalence relation between functions, it is possible to partition the space of problems into equivalence classes in which the algorithm has the same behavior. We show that only the probabilistic model is able to generate different partitions of the set of possible problems and hence predetermines the number of different behaviors that the algorithm can exhibit. As a natural consequence of our definitions, all the objective functions are in the same equivalence class when the algorithm does not impose restrictions on the probabilistic model. The taxonomy of problems, which is also valid for finite populations, is studied in depth for a simple EDA that assumes independence among the variables of the problem. We provide a necessary and sufficient condition to decide the equivalence between functions and then develop the operators to describe and count the members of a class. In addition, we show the intrinsic relation between univariate EDAs and the neighborhood system induced by the Hamming distance by proving that all the functions in the same class have the same number of local optima and that these optima occupy the same ranking positions. Finally, we carry out numerical simulations in order to analyze the different behaviors that the algorithm can exhibit for the functions defined over the search space [Formula: see text].
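For the unrestricted-model case described in the abstract, two functions fall in the same class exactly when rank-based selection cannot tell them apart, i.e. when they rank the whole search space identically. A small brute-force sketch of that check for binary spaces (the names and the enumeration are illustrative; for restricted models the paper's full condition also involves the probabilistic model):

```python
from itertools import product

def ranking(f, n):
    # Order all binary strings of length n by objective value,
    # best first; rank-based selection sees only this ordering.
    pts = list(product((0, 1), repeat=n))
    return sorted(pts, key=f, reverse=True)

def same_behavior(f, g, n):
    # Functions with identical rankings are indistinguishable
    # to a rank-based EDA with an unrestricted model.
    return ranking(f, n) == ranking(g, n)
```

Any strictly increasing transformation of a function, e.g. 2f + 3, lands in the same class, while OneMax and LeadingOnes already separate at n = 2.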


2009 ◽  
Vol 48 (03) ◽  
pp. 236-241 ◽  
Author(s):  
V. Robles ◽  
P. Larrañaga ◽  
C. Bielza

Summary Objectives: The “large k (genes), small N (samples)” phenomenon complicates the problem of microarray classification with logistic regression. The indeterminacy of the maximum likelihood solutions, multicollinearity of the predictor variables, and data over-fitting cause unstable parameter estimates. Moreover, computational problems arise due to the large number of predictor variables (genes). Regularized logistic regression excels as a solution. However, the difficulties found here involve an objective function that is hard to optimize from a mathematical viewpoint and regularization parameters that require careful tuning. Methods: These difficulties are tackled by introducing a new way of regularizing logistic regression. Estimation of distribution algorithms (EDAs), a kind of evolutionary algorithm, emerge as natural regularizers. Obtaining the regularized estimates of the logistic classifier amounts to maximizing the likelihood function via our EDA, without penalization. Likelihood penalties add a number of difficulties to the resulting optimization problem, which vanish in our case. Simulation of new estimates during the evolutionary process of the EDA is performed in such a way that guarantees their shrinkage while maintaining the learned probabilistic dependence relationships among them. The EDA process is embedded in an adapted recursive feature elimination procedure, thereby providing the genes that are the best markers for the classification. Results: The consistency with the literature and the excellent classification performance achieved with our algorithm are illustrated on four microarray data sets: Breast, Colon, Leukemia and Prostate. Details on the last two data sets are available as supplementary material. Conclusions: We have introduced a novel EDA-based logistic regression regularizer. It implicitly shrinks the coefficients during the EDA evolution process while optimizing the usual likelihood function. The approach is combined with a gene subset selection procedure and automatically tunes the required parameters. Empirical results on microarray data sets provide sparse models with confirmed genes that perform better in classification than other competing regularized methods.
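A heavily simplified sketch of the core idea follows: maximize the unpenalized likelihood with an EDA whose model-update step shrinks the coefficients. Here a univariate Gaussian model stands in for the paper's dependence-preserving model, and all names, population sizes, and the shrinkage factor are assumptions.

```python
import math
import random

def log_lik(beta, X, y):
    # Ordinary (unpenalized) logistic log-likelihood.
    ll = 0.0
    for xi, yi in zip(X, y):
        z = sum(b * x for b, x in zip(beta, xi))
        z = max(-35.0, min(35.0, z))   # avoid overflow in exp
        p = 1.0 / (1.0 + math.exp(-z))
        ll += yi * math.log(p) + (1 - yi) * math.log(1.0 - p)
    return ll

def eda_logistic(X, y, pop=60, sel=20, iters=40, shrink=0.9, seed=0):
    # EDA over real-valued coefficients: sample candidates, keep
    # the highest-likelihood ones, refit the Gaussian model, and
    # shrink the means toward zero so regularization is implicit.
    rng = random.Random(seed)
    d = len(X[0])
    mu, sd = [0.0] * d, [1.0] * d
    for _ in range(iters):
        cand = [[rng.gauss(mu[j], sd[j]) for j in range(d)] for _ in range(pop)]
        cand.sort(key=lambda b: log_lik(b, X, y), reverse=True)
        elite = cand[:sel]
        for j in range(d):
            col = [b[j] for b in elite]
            mu[j] = shrink * sum(col) / sel            # shrinkage step
            sd[j] = max(1e-3, (sum((v - mu[j]) ** 2 for v in col) / sel) ** 0.5)
    return mu
```

The likelihood itself is never penalized; the pull toward zero lives entirely in the sampling distribution, which is the design choice the abstract highlights.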


2011 ◽  
Vol 19 (2) ◽  
pp. 225-248 ◽  
Author(s):  
Reza Rastegar

In this paper we obtain bounds on the probability of convergence to the optimal solution for the compact genetic algorithm (cGA) and population-based incremental learning (PBIL). Moreover, we give a sufficient condition for convergence of these algorithms to the optimal solution and compute a range of possible values for the algorithm parameters at which there is convergence to the optimal solution with a predefined confidence level.
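As an illustration of the parameter regime the paper analyzes, here is a minimal PBIL run on OneMax; the problem, population size, and learning rate are arbitrary choices for the sketch, not values derived from the paper's bounds.

```python
import random

def pbil_onemax(n=10, pop=30, lr=0.1, iters=200, seed=0):
    # PBIL: keep a probability vector over bits, sample a
    # population from it, and nudge the vector toward the best
    # sample. The learning rate lr and population size pop are
    # the parameters whose values govern convergence.
    rng = random.Random(seed)
    p = [0.5] * n
    best_ever = 0
    for _ in range(iters):
        samples = [[1 if rng.random() < pj else 0 for pj in p] for _ in range(pop)]
        best = max(samples, key=sum)
        best_ever = max(best_ever, sum(best))
        p = [(1 - lr) * pj + lr * bj for pj, bj in zip(p, best)]
    return p, best_ever
```

With too large a learning rate the vector can fixate on a wrong bit before sampling gathers enough evidence, which is why convergence guarantees only hold for a restricted parameter range.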


2005 ◽  
Vol 13 (1) ◽  
pp. 43-66 ◽  
Author(s):  
J. M. Peña ◽  
J. A. Lozano ◽  
P. Larrañaga

Many optimization problems are what can be called globally multimodal, i.e., they present several global optima. Unfortunately, this is a major source of difficulty for most estimation of distribution algorithms, degrading their effectiveness and efficiency due to genetic drift. With the aim of overcoming these drawbacks for the optimization of discrete globally multimodal problems, this paper introduces and evaluates a new estimation of distribution algorithm based on unsupervised learning of Bayesian networks. We report the satisfactory results of our experiments with symmetrical binary optimization problems.
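The genetic drift at issue can be demonstrated with a standard univariate EDA (UMDA) on the symmetrical TwoMax function, whose two global optima are the all-zeros and all-ones strings; this sketch only exhibits the failure mode the paper targets, and every name and parameter in it is an assumption.

```python
import random

def twomax(x):
    # Symmetrical binary objective: all-zeros and all-ones
    # are both global optima.
    s = sum(x)
    return max(s, len(x) - s)

def umda(f, n=20, pop=100, sel=50, iters=60, seed=1):
    # Univariate EDA: re-estimate independent bit marginals
    # from the selected half of the population each generation.
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(iters):
        samples = [[1 if rng.random() < pj else 0 for pj in p] for _ in range(pop)]
        samples.sort(key=f, reverse=True)
        elite = samples[:sel]
        p = [sum(x[j] for x in elite) / sel for j in range(n)]
    return p
```

Although both optima are equally good, the marginals drift away from 0.5 and the model commits to at most one basin, losing the other global optimum — the drawback the paper's Bayesian-network-based EDA is designed to overcome.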

