Evolutionary Algorithms in Theory and Practice



Published By Oxford University Press

9780195099713, 9780197560921

Author(s):  
Thomas Bäck

In section 1.1.3 it was clarified that a variety of different, more or less drastic changes of the genome are summarized under the term mutation by geneticists and evolutionary biologists. Several mutation events are within the bounds of possibility, ranging from single base pair changes to genomic mutations. The phenotypic effect of genotypic mutations, however, can hardly be predicted from knowledge about the genotypic change. In general, advantageous mutations have a relatively small effect on the phenotype, i.e., their expression does not deviate very much (in phenotype space) from the expression of the unmutated genotype ([Fut90], p. 85). More drastic phenotypic changes are usually lethal, or their carriers die out due to a reduced capability of reproduction. The discussion of the extent to which evolution based on phenotypic macro-mutations, in the sense of “hopeful monsters,” is important in facilitating the process of speciation is still ongoing (such macro-mutations have been observed and classified for the fruitfly Drosophila melanogaster, see [Got89], p. 286). At present, only a few data sets are available to fully assess the phylogenetic significance of macro-mutations, but small phenotypic effects of mutation are clearly observed to be predominant. This is the main argument justifying the use of normally distributed mutations with expectation zero in Evolutionary Programming and Evolution Strategies. It reflects the emphasis of both algorithms on modeling phenotypic rather than genotypic change. The model of mutation is quite different in Genetic Algorithms, where bit reversal events (see section 2.3.2) corresponding to single base pair mutations in biological reality implement a model of evolution on the basis of genotypic changes. As observed in nature, the mutation rate used in Genetic Algorithms is very small (cf. section 2.3.2).
In contrast to the biological model, the mutation rate is neither variable through external influences nor controlled (at least partially) by the genotype itself (cf. section 1.1.3). Holland defined the role of mutation in Genetic Algorithms to be a secondary one, of little importance in comparison to crossover (see [Hol75], p. 111): . . . Summing up: Mutation is a “background” operator, assuring that the crossover operator has a full range of alleles so that the adaptive plan is not trapped on local optima. . . .
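The two mutation models described above can be contrasted in a minimal sketch. This is an illustrative implementation, not code from the book; the function names and the default rate p_m are assumptions chosen for the example.

```python
import random

def bit_flip_mutation(bits, p_m=0.001):
    """GA-style mutation: flip each bit independently with a small probability
    p_m, modeling single base pair mutations on the genotype."""
    return [(1 - b) if random.random() < p_m else b for b in bits]

def gaussian_mutation(x, sigma=0.1):
    """ES/EP-style mutation: add normally distributed noise with expectation
    zero, so small phenotypic changes are far more likely than large ones."""
    return [xi + random.gauss(0.0, sigma) for xi in x]
```

Note how the Gaussian variant makes small deviations from the parent the most probable outcome, mirroring the biological observation that small phenotypic effects of mutation predominate.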


Author(s):  
Thomas Bäck

The genetic operators summarized in the set Ω, i.e. mutation and recombination (and possibly others, e.g. inversion), create new individuals in a completely undirected way. In Evolutionary Algorithms, the selection operator plays a major role by imposing a direction on the search process, i.e. a clear preference for those individuals which perform better according to the fitness measure Φ. Selection is the only component of Evolutionary Algorithms where the fitness of individuals has an impact on the evolution process. The practical implementations of selection as discussed in sections 2.1.4, 2.2.4, and 2.3.4 seemingly contradict the biological viewpoint presented in section 1.1, where natural selection was emphasized not to be an active force but instead to be characterized by different survival and reproduction rates. However, artificial implementation models and biological reality do not necessarily contradict each other. While in biological systems fitness can only be measured indirectly by differences in growth rates, fitness in Evolutionary Algorithms is a direct, well-defined and evaluable property of individuals. The biological struggle for existence (e.g. by predator-prey interactions, capabilities of somatic adaptation, and the particular physical properties of individuals) has no counterpart in computer implementations of standard Evolutionary Algorithms. Therefore, an artificial abstraction of these mechanisms can use fitness measures to determine survival and reproduction a posteriori, since the struggle for existence is completely hidden in the evaluation process of individuals. The fact that different survival and reproduction rates constitute selection is valid in both cases, but in Evolutionary Algorithms fitness is measurable and determines survival and reproduction behavior, the reverse of the situation in biological reality.
This is simply an implication of the fitness-centered intention which necessarily pervades the design and application of these algorithms. Therefore, it is a logical consequence to model selection as an active, fitness-based component of Evolutionary Algorithms. However, how to model selection is by no means a simple problem. In evolutionary biology, a distinction is usually made between stabilizing, directed, and disruptive selection (see [Fut90], pp. 174–175). In the case of stabilizing selection, intermediate phenotypes have the best fitness values, while disruptive selection is characterized by two or more distinct phenotypes that are highly fit and by intermediate phenotypes of low fitness (this assumes an, albeit unknown, ordering of phenotypes).
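The active, fitness-based role of selection can be sketched with a simple tournament scheme. This is an illustrative example, not the book's implementation; the exact tournament variant and the parameter q are assumptions for the sketch (the EP scheme discussed in the book scores wins against q opponents rather than drawing q contestants directly).

```python
import random

def tournament_selection(population, fitness, q=10, minimize=True):
    """Fitness-based selection: each survivor is the best of q randomly
    drawn individuals, so better fitness directly implies more offspring."""
    n = len(population)
    key = (lambda i: fitness[i]) if minimize else (lambda i: -fitness[i])
    selected = []
    for _ in range(n):
        contestants = random.sample(range(n), min(q, n))
        selected.append(population[min(contestants, key=key)])
    return selected
```

The undirected variation operators propose candidates; only this step imposes a direction on the search, by letting measured fitness determine survival and reproduction a posteriori.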


Author(s):  
Thomas Bäck

Given the discussions about Evolutionary Algorithms from the previous chapters, we shall now apply them to the artificial topologies just presented. This will be done by simply running the algorithms in their standard forms (according to the definitions of standard forms as given in sections 2.1.6, 2.2.6, and 2.3.6) for a reasonable number of function evaluations on these problems. The experiment compares an algorithm that self-adapts n standard deviations and uses recombination (the Evolution Strategy), an algorithm that self-adapts n standard deviations and renounces recombination (meta-Evolutionary Programming), and an algorithm that renounces self-adaptation but stresses the role of recombination (the Genetic Algorithm). Furthermore, all algorithms rely on different selection mechanisms. With respect to the level of self-adaptation, the choice of the Evolution Strategy and Evolutionary Programming variants is fair, while the Genetic Algorithm leaves us no choice (i.e., no self-adaptation mechanism is used within the standard Genetic Algorithm). Concerning the population size, the number of offspring individuals (λ) is adjusted to a common value of λ = 100 in order to achieve comparability of population sizes while at the same time limiting the computational requirements to a justifiable amount. This results in the following three algorithmic instances that are compared here (using the standard notation introduced in chapter 2):
• ES(n, 0, rdI, s(15,100)): An Evolution Strategy that self-adapts n standard deviations but does not use correlated mutations. Recombination is discrete on object variables and global intermediate on standard deviations, and the algorithm uses a (15,100)-selection mechanism.
• mEP(6, 10, 100): A meta-Evolutionary Programming algorithm that, by default, self-adapts n variances and controls mutation of variances by a meta-parameter ζ = 6. The tournament size for selection and the population size amount to q = 10 and μ = 100, respectively.
• GA(30, 0.001, r{0.6, 2}, 5, 100): A Genetic Algorithm that evolves a population of μ = 100 bitstrings of length l = 30·n each. The scaling window size for linear dynamic scaling is set to ω = 5. Proportional selection, a two-point crossover operator with application rate 0.6, and a mutation operator with bit-reversal probability 1.0·10^-3 complete the algorithm.
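The generation cycle shared by these instances can be sketched as a single (μ,λ)-comma-selection step. This is a deliberately generic illustration, not one of the three parameterizations above; the function names and the deterministic sorting are assumptions of the sketch.

```python
import random

def comma_selection_step(parents, make_offspring, fitness, lam=100):
    """One generation of (mu, lam)-selection: create lam offspring from the
    mu parents and keep only the mu best offspring (parents are discarded,
    as in the comma strategy, in contrast to elitist plus-selection)."""
    mu = len(parents)
    offspring = [make_offspring(random.choice(parents)) for _ in range(lam)]
    offspring.sort(key=fitness)  # minimization: smaller fitness is better
    return offspring[:mu]
```

With mu = 15 and lam = 100 this corresponds to the selection pressure of the (15,100)-Evolution Strategy listed above; the EP and GA instances replace this step by tournament and proportional selection, respectively.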


Author(s):  
Thomas Bäck

Within the total of seven chapters presented here on the topic of Evolutionary Algorithms there are a number of crude but nevertheless powerful simplifications of the model of organic evolution. Chapter 1 clarified that Evolutionary Algorithms, if they are interpreted as global optimization algorithms, are not to be confused with the oversimplified concept of uniform random search. Instead, they rely on keeping a history insofar as the subsequent generation is created at each step of the evolution process from the current generation maintained by the algorithm. In other words: Evolutionary Algorithms are representatives of the mathematical concept of a Markov process (or a Markov chain, in discrete spaces). Concerning the convergence reliability of Evolutionary Algorithms, the theoretical property of global convergence with probability one holds for all variants that use an elitist selection method and guarantee a reachability property of mutation, which is basically assured by working with a nonzero mutation rate in Genetic Algorithms (or with nonzero standard deviations in Evolution Strategies and Evolutionary Programming). These results are summarized in theorems 7, 10, and 13, which are based on the general convergence theorem 3 for global random search algorithms and on well-known results on absorbing Markov chains, respectively. In contrast to convergence reliability investigations, where the focus is on the explorative character of the search, convergence velocity analysis emphasizes the exploitation of information collected about a promising region or point in the search space. The two properties conflict and cause a trade-off that dominates the behavior and control of Evolutionary Algorithms. Nevertheless, convergence velocity results were so far known only for Evolution Strategies (see section 2.1.7) but can easily be transferred to standard Evolutionary Programming, as demonstrated in section 2.2.7.
The result provides a clear indication that the step-size control of standard EP is useless for even moderately large dimensions of the search space and for objective functions that possess a non-vanishing global optimum. More advanced versions of Evolutionary Programming overcome this problem by a self-adaptive control of strategy parameters quite similar to the technique used in Evolution Strategies.
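The self-adaptive step-size control mentioned above can be illustrated with the log-normal update rule commonly used in Evolution Strategies. This is a sketch under assumptions: the learning rates tau and tau_prime follow the usual recommendations (proportional to 1/sqrt(2·sqrt(n)) and 1/sqrt(2·n)), but the function itself is illustrative, not the book's code.

```python
import math
import random

def self_adaptive_mutation(x, sigmas, tau=None, tau_prime=None):
    """ES-style self-adaptation: mutate the step sizes log-normally first
    (one global and one per-component factor), then use the new step sizes
    to mutate the object variables with zero-mean Gaussian noise."""
    n = len(x)
    tau = tau if tau is not None else 1.0 / math.sqrt(2.0 * math.sqrt(n))
    tau_prime = tau_prime if tau_prime is not None else 1.0 / math.sqrt(2.0 * n)
    global_factor = tau_prime * random.gauss(0.0, 1.0)  # shared by all components
    new_sigmas = [s * math.exp(global_factor + tau * random.gauss(0.0, 1.0))
                  for s in sigmas]
    new_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, new_sigmas)]
    return new_x, new_sigmas
```

Because the multiplicative log-normal update keeps step sizes strictly positive and lets selection favor individuals whose step sizes match the local topology, it avoids the fixed-variance weakness of standard EP described above.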


Author(s):  
Thomas Bäck

In this chapter, an outline of an Evolutionary Algorithm is formulated that is sufficiently general to cover at least the three different mainstream algorithms mentioned before, namely, Evolution Strategies, Genetic Algorithms, and Evolutionary Programming. As in the previous chapter, algorithms are formulated in a language obtained by mixing pseudocode and mathematical notation, thus allowing for a high-level description which concentrates on the main components. These are: a population of individuals which is manipulated by genetic operators (especially mutation and recombination, but others may also be incorporated) and undergoes a fitness-based selection process, where the fitness of an individual depends on its quality with respect to the optimization task.
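The main components named above can be sketched as a generic loop. This is an illustrative outline in Python rather than the book's pseudocode; all parameter names and the specific select strategy passed in by the caller are assumptions of the example.

```python
import random

def evolutionary_algorithm(init, evaluate, recombine, mutate, select,
                           pop_size=100, generations=50):
    """Generic EA outline: a population of individuals is manipulated by
    recombination and mutation and undergoes fitness-based selection."""
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(recombine(random.choice(population),
                                      random.choice(population)))
                     for _ in range(pop_size)]
        fitnesses = [evaluate(ind) for ind in offspring]
        population = select(offspring, fitnesses)
    return min(population, key=evaluate)  # minimization convention
```

Instantiating init, recombine, mutate, and select differently yields the three mainstream algorithms: Gaussian mutation with comma selection gives an Evolution Strategy, bit-flip mutation with proportional selection a Genetic Algorithm, and tournament selection without recombination an Evolutionary Programming variant.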


Author(s):  
Thomas Bäck

The conversation between Alice and the Cat gives a perfect characterization of the meandering path, full of dead ends, sharp curves, and hurdles, that one has to follow when doing research. After three and a half years, my first section of this path through wonderland ends with the work presented here. In its final form, it deals with Evolutionary Algorithms (for parameter optimization purposes) and puts particular emphasis on extensions and analysis of Genetic Algorithms, a special instance of this class of algorithms. The structure of this research, however, has grown over the years and is only slightly related to Classifier Systems, the original starting point of my work. Classifier Systems contain Genetic Algorithms as a component for rule discovery, and since Classifier Systems turned out to lack theoretical understanding almost completely, concentrating on Genetic Algorithms was a natural step and provided the basis of this work. The book is divided into two parts that reflect the emphasis on Genetic Algorithms (part II) and the general framework of Evolutionary Algorithms that Genetic Algorithms fit into (part I). Part I concentrates on the development of a general description of Evolutionary Algorithms, i.e., search algorithms gleaned from organic evolution. These algorithms were developed more than thirty years ago, in the “ancient” times of computer science, when researchers came up with the idea of solving problems by imitating the intelligent capabilities of individual brains and of populations. The former approach, emphasizing an individual’s intelligence, led to the development of research topics such as artificial neural networks and knowledge-based symbolic artificial intelligence. The latter emphasized the collective learning properties exhibited by populations of individuals, which benefit from a high diversity of their genetic material.
Modeling organic evolution provides the basis for a variety of concepts such as genotype, genetic code, phenotype, self-adaptation, etc., which are incorporated into Evolutionary Algorithms. Consequently, the necessary prerequisites to understand the relations between algorithmic realizations and biological reality are provided in chapter 1. In addition to this, chapter 1 clarifies the relationship between global random search algorithms and Evolutionary Algorithms, Artificial Intelligence and Evolutionary Algorithms, and computational complexity and Evolutionary Algorithms.


Author(s):  
Thomas Bäck

Evolutionary Algorithms (EAs), the topic of this work, constitute an interdisciplinary research field with relations to biology, Artificial Intelligence, numerical optimization, and decision support in almost any engineering discipline. Therefore, an attempt to cover at least some of these relations must necessarily result in several introductory pages, bearing in mind that it can hardly be complete. This is the reason for a rather voluminous introduction to the fundamentals of Evolutionary Algorithms in section 1.1, without giving any practically useful description of the algorithms at this point. At the moment, it is sufficient to know that these algorithms are based on models of organic evolution, i.e., nature is the source of inspiration. They model the collective learning process within a population of individuals, each of which represents not only a search point in the space of potential solutions to a given problem, but may also be a temporal container of current knowledge about the “laws” of the environment. The starting population is initialized by an algorithm-dependent method and evolves towards successively better regions of the search space by means of (more or less) randomized processes of recombination, mutation, and selection. The environment delivers quality information (a fitness value) for new search points, and the selection process favors individuals of higher quality, which reproduce more often than worse individuals. The recombination mechanism allows for mixing of parental information while passing it on to descendants, and mutation introduces innovation into the population. This process is currently realized by three different mainstreams of Evolutionary Algorithms, i.e. Evolution Strategies (ESs), Genetic Algorithms (GAs), and Evolutionary Programming (EP), details of which are presented in chapter 2. This chapter presents their biological background in order to provide the necessary understanding of the basic natural processes (section 1.1).
Evolutionary Algorithms are then discussed with respect to their impact on Artificial Intelligence and, at the same time, their interpretation as a technique for machine learning (section 1.2). Furthermore, their interpretation as a global optimization technique and the basic mathematical terminology as well as some convergence results on random search algorithms as far as they are useful for Evolutionary Algorithms are presented in section 1.3.


Author(s):  
Thomas Bäck

So far, the basic knowledge about setting up the parameters of Evolutionary Algorithms stems from a lot of empirical work and few theoretical results. The standard guidelines for parameters such as crossover rate, mutation probability, and population size, as well as the standard settings of the recombination operator and selection mechanism, were presented in chapter 2 for the Evolutionary Algorithms. In the case of Evolution Strategies and Evolutionary Programming, the self-adaptation mechanism for strategy parameters solves this parameterization problem in an elegant way, while for Genetic Algorithms no such technique is employed. Chapter 6 served to identify a reasonable choice of the mutation rate, but no theoretically confirmed knowledge about the choice of the crossover rate and the crossover operator is available. With respect to the optimal population size for Genetic Algorithms, Goldberg presented some theoretical arguments based on maximizing the number of schemata processed by the algorithm within fixed time, arriving at an optimal size λ* = 3 for serial implementations and extremely small string lengths [Gol89b]. However, as indicated in section 2.3.7 and chapter 6, it is by no means clear whether the schema processing point of view should be preferred to the convergence velocity investigations presented in section 2.1.7 and chapter 6. As pointed out several times, we prefer the point of view which concentrates on a convergence velocity analysis. Consequently, the search for useful parameter settings of a Genetic Algorithm constitutes an optimization problem in itself, leading to the idea of using an Evolutionary Algorithm on a higher level to evolve optimal parameter settings of Genetic Algorithms. Due to the existence of two logically different levels in such an approach, it is reasonable to call it a meta-evolutionary algorithm.
By concentrating on meta-evolution in this chapter, we will radically deviate from the biological model, where no two-level evolution process is to be observed but the self-adaptation principle can well be identified (as argued in chapter 2). However, there are several reasons why meta-evolution promises to yield some helpful insight into the working principles of Evolutionary Algorithms: First, meta-evolution provides the possibility to test whether the basic heuristic and the theoretical knowledge about parameterizations of Genetic Algorithms are also evolvable by the experimental approach, thus allowing us to confirm the heuristics or to point to alternatives.
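The two-level structure of meta-evolution can be sketched as an outer EA whose individuals are parameter settings and whose fitness evaluations are complete runs of the inner algorithm. This is a minimal illustration under assumptions: the function names, the truncation-plus-mutation outer strategy, and the population sizes are all choices made for the sketch, not the book's meta-GA.

```python
import random

def meta_evolution(run_inner_ea, init_params, mutate_params,
                   meta_pop=10, meta_generations=20):
    """Meta-EA sketch: the outer EA evolves parameter settings; the fitness
    of a setting is the (to-be-minimized) result of the inner EA run with
    those parameters. Parents survive, so the outer level is elitist."""
    settings = [init_params() for _ in range(meta_pop)]
    for _ in range(meta_generations):
        scored = sorted(((run_inner_ea(p), p) for p in settings),
                        key=lambda t: t[0])
        parents = [p for _, p in scored[:meta_pop // 2]]
        settings = parents + [mutate_params(random.choice(parents))
                              for _ in range(meta_pop - len(parents))]
    return min(settings, key=run_inner_ea)
```

The main cost of this approach is evident from the structure: every outer fitness evaluation is a full inner run, so the total number of objective function evaluations multiplies across the two levels.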


Author(s):  
Thomas Bäck

In order to facilitate an empirical comparison of the performance of Evolution Strategies, Evolutionary Programming, and Genetic Algorithms, a test environment for these algorithms must be provided in the form of several objective functions f : IRn → IR. Finding an appropriate and representative set of test problems is not an easy task, since any particular combination of properties represented by a test function does not allow for generalized performance statements. However, there is evidence from a vast number of applications that Evolutionary Algorithms are robust in the sense that they give reasonable performance over a wide range of different topologies. Here, a set of test functions is used that are completely artificial and simple, i.e., they are stated in a closed, analytical form and have no direct background in any practical application. Instead, they allow for a detailed analysis of certain special characteristics of the topology, e.g. unimodality or multimodality, continuity or discontinuity, and others. If a hypothesis about the behavior of Evolutionary Algorithms is formulated in terms of such strong topological characteristics, the corresponding idealized test function provides a good instrument for testing it. Furthermore, since many known test sets have some functions in common, at least a minimal level of comparability of results is often guaranteed. Finally, before we can expect an algorithm to be successful on hard problems, it has to demonstrate that it does not fail on simple ones. On the other hand, the (public relations) effect of using artificial topologies is vanishingly small, since the test functions used are of no industrial relevance. In this way, researchers working with such test functions can never rest on industrial laurels.
A more legitimate objection against artificial topologies may be that they are possibly not representative of the “average complexity” of real-world problems, and that some regularity features of their topology may inadmissibly speed up the search. However, most test function sets include even multimodal functions of remarkable complexity, such that only the regularity argument counts against using an artificial function set.
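Two widely used artificial test functions illustrate the contrast between the topological characteristics mentioned above: the sphere model (unimodal, continuous) and Rastrigin's function (highly multimodal but regular). Both are standard in the literature; they are given here as representative examples in the spirit of the set described above, not necessarily the exact set used in the book.

```python
import math

def sphere(x):
    """Sphere model: unimodal and continuous, with the
    global minimum f(0, ..., 0) = 0."""
    return sum(xi * xi for xi in x)

def rastrigin(x, A=10.0):
    """Rastrigin's function: a regular grid of many local optima
    superimposed on the sphere model; global minimum f(0, ..., 0) = 0."""
    return A * len(x) + sum(xi * xi - A * math.cos(2.0 * math.pi * xi)
                            for xi in x)
```

Rastrigin's function is a good example of the regularity objection: its local optima are arranged on a perfectly regular lattice, a structure real-world problems rarely exhibit.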

