Large-scale parallelization of partial evaluations in evolutionary algorithms for real-world problems

Author(s):  
Anton Bouter ◽  
Tanja Alderliesten ◽  
Arjan Bel ◽  
Cees Witteveen ◽  
Peter A. N. Bosman
Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 116
Author(s):  
Junhua Ku ◽  
Fei Ming ◽  
Wenyin Gong

In the real world, symmetry or asymmetry widely exists in various problems, some of which can be formulated as constrained multi-objective optimization problems (CMOPs). Over the past few years, handling CMOPs with evolutionary algorithms has become increasingly popular, and many constrained multi-objective optimization evolutionary algorithms (CMOEAs) have been proposed. Since different CMOEAs may suit different CMOPs, however, it is difficult to choose the best one for the CMOP at hand. In this paper, we propose an ensemble framework of CMOEAs that aims to achieve better versatility in handling diverse CMOPs. In the proposed framework, the hypervolume indicator is used to evaluate the performance of the CMOEAs, and a decreasing mechanism is devised to delete poorly performing CMOEAs and gradually determine the most suitable one. A new CMOEA, named ECMOEA, is developed based on the framework and three state-of-the-art CMOEAs. Experimental results on five benchmarks comprising 52 instances in total demonstrate the effectiveness of our approach. In addition, the superiority of ECMOEA is verified through comparisons with seven state-of-the-art CMOEAs. Moreover, the effectiveness of ECMOEA on real-world problems is evaluated on eight instances.
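The selection mechanism the abstract describes, scoring each candidate CMOEA by the hypervolume indicator and periodically dropping the worst, can be sketched in a few lines. The 2-D minimization setting and the `prune_worst` helper below are illustrative assumptions, not the paper's actual implementation:

```python
def hypervolume_2d(front, ref):
    """Dominated hypervolume of a 2-D minimization front w.r.t. a reference point."""
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    nd = []  # nondominated subset: y strictly decreases as x grows
    for x, y in pts:
        if not nd or y < nd[-1][1]:
            nd.append((x, y))
    hv = 0.0
    for i, (x, y) in enumerate(nd):
        next_x = nd[i + 1][0] if i + 1 < len(nd) else ref[0]
        hv += (next_x - x) * (ref[1] - y)  # vertical strip dominated by point i
    return hv

def prune_worst(scores):
    """Decreasing mechanism (sketch): drop the CMOEA with the lowest hypervolume."""
    worst = min(scores, key=scores.get)
    return {k: v for k, v in scores.items() if k != worst}
```

Applied repeatedly across generations, this pruning converges on a single surviving CMOEA, which is the "gradually determine the most suitable CMOEA" behaviour the framework aims for.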


Author(s):  
Timothy Ganesan ◽  
Pandian Vasant ◽  
Igor Litvinchev ◽  
Mohd Shiraz Aris

The increasing complexity of engineering systems has spurred the development of highly efficient optimization techniques. This chapter focuses on two novel optimization methodologies: extreme value stochastic engines (random number generators) and the coupled map lattice (CML). It proposes incorporating extreme value distributions into the stochastic engines of conventional metaheuristics and implementing CMLs to improve the overall optimization. The central idea is to propose approaches for dealing with highly complex, large-scale multi-objective (MO) problems. In this work, the differential evolution (DE) approach was employed, incorporating the extreme value stochastic engine, while the CML was employed independently (as an analogue to evolutionary algorithms). The techniques were then applied to optimize a real-world MO gas turbine-absorption chiller system. Comparative analyses among the conventional DE approach (Gauss-DE), the extreme value DE strategies, and the CML were carried out.
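A minimal sketch of the idea of an extreme value stochastic engine inside DE: draw from a Gumbel (extreme value type I) distribution via inverse-transform sampling and use it to perturb the DE/rand/1 scale factor. The specific coupling (perturbing F, the 0.1 weight) is an illustrative assumption, not the chapter's exact scheme:

```python
import math
import random

def gumbel(mu=0.0, beta=1.0, rng=random):
    """Gumbel (extreme value type I) sample via inverse transform."""
    u = min(max(rng.random(), 1e-12), 1.0 - 1e-12)  # keep u strictly in (0, 1)
    return mu - beta * math.log(-math.log(u))

def de_rand_1(pop, i, f=0.5, rng=random):
    """DE/rand/1 mutant for individual i; the scale factor is perturbed
    by a Gumbel draw (assumed coupling, for illustration only)."""
    r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
    scale = f + 0.1 * gumbel(rng=rng)  # extreme value stochastic engine
    return [a + scale * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
```

A Gauss-DE baseline, as in the chapter's comparison, would simply replace the Gumbel draw with `rng.gauss(0.0, 1.0)`; the heavier upper tail of the Gumbel distribution occasionally produces much larger mutation steps.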


Author(s):  
Qi Wang ◽  
Miaoting Guan ◽  
Wen Huang ◽  
Libing Wang ◽  
Zhihong Wang ◽  
...  

Abstract Applications of evolutionary algorithms (EAs) to real-world problems are usually hindered by parameterisation issues and computational cost. This paper shows how the combinatorial effects related to the parameterisation issues of EAs can be visualised and extracted by the so-called compass plot. This new plot is inspired by the traditional Chinese compass used for navigation and geomantic detection. We demonstrate the value of the proposed compass plot in two scenarios with application to the optimal design of the Hanoi water distribution system. One is to identify the dominant parameters in the well-known NSGA-II. The other is to seek efficient combinations of the search operators embedded in Borg, which uses an ensemble of search operators by auto-adapting their use at runtime to fit an optimisation problem. As such, the implicit and vital interdependency among parameters and search operators can be intuitively demonstrated and identified. In particular, the compass plot revealed some counter-intuitive relationships among the algorithm parameters that led to a considerable change in performance. The information extracted, in turn, facilitates a deeper understanding of EAs and better practices for real-world cases, which eventually leads to more cost-effective decision-making.


2009 ◽  
pp. 131-142
Author(s):  
Thomas E. Potok ◽  
Xiaohui Cui ◽  
Yu Jiao

The rate at which information overwhelms humans significantly exceeds the rate at which humans have learned to process, analyze, and leverage it. To overcome this challenge, new methods of computing must be formulated, and scientists and engineers have looked to nature for inspiration in developing them. Consequently, evolutionary computing has emerged as a new paradigm for computing and has rapidly demonstrated its ability to solve real-world problems where traditional techniques have failed. This field has now become quite broad, encompassing areas ranging from artificial life to neural networks. This chapter focuses specifically on two sub-areas of nature-inspired computing: evolutionary algorithms and swarm intelligence.


2017 ◽  
Vol 2017 (3) ◽  
pp. 147-167 ◽  
Author(s):  
Gilad Asharov ◽  
Daniel Demmler ◽  
Michael Schapira ◽  
Thomas Schneider ◽  
Gil Segev ◽  
...  

Abstract The Border Gateway Protocol (BGP) computes routes between the organizational networks that make up today’s Internet. Unfortunately, BGP suffers from deficiencies, including slow convergence, security problems, a lack of innovation, and the leakage of sensitive information about domains’ routing preferences. To overcome some of these problems, we revisit the idea of centralizing and using secure multi-party computation (MPC) for interdomain routing which was proposed by Gupta et al. (ACM HotNets’12). We implement two algorithms for interdomain routing with state-of-the-art MPC protocols. On an empirically derived dataset that approximates the topology of today’s Internet (55 809 nodes), our protocols take as little as 6 s of topology-independent precomputation and only 3 s of online time. We show, moreover, that when our MPC approach is applied at country/region-level scale, runtimes can be as low as 0.17 s online time and 0.20 s pre-computation time. Our results motivate the MPC approach for interdomain routing and furthermore demonstrate that current MPC techniques are capable of efficiently tackling real-world problems at a large scale.


1998 ◽  
Vol 1 (05) ◽  
pp. 400-407 ◽  
Author(s):  
G.S. Shiralkar ◽  
R.E. Stephenson ◽  
Wayne Joubert ◽  
Olaf Lubeck ◽  
Bart van Bloemen Waanders

This paper (SPE 51969) was revised for publication from paper SPE 37975, first presented at the 1997 SPE Reservoir Simulation Symposium, Dallas, 8-11 June. Original manuscript received for review 30 June 1997. Revised manuscript received 30 March 1998. Paper peer approved 6 July 1998. Summary We describe a new production model, Falcon, that has achieved speeds on parallel computers that are 100 times faster on real-world problems than current production models on a vector computer. Falcon has been used to conduct the largest geostatistical reservoir study ever conducted within Amoco. In this paper we discuss the following: Falcon's data parallel paradigm with FORTRAN 90 and High Performance FORTRAN (HPF); its single program, multiple data (SPMD) paradigm with message passing; efficient memory management that enables simulation of enormous studies; and a numerical formulation that reconciles the generalized compositional approach (based on component masses and pressure) with earlier approaches (based on pressures and saturations) in a more general and more efficient formulation. We also discuss Falcon's scalability up to 512 processor nodes and the performance (timings and memory) achieved on a number of parallel platforms, including Cray Research's T3D and T3E, SGI's Power Challenge and Origin 2000, Thinking Machines' CM5, and IBM's SP2. Falcon also runs on single-processor computers such as PCs and IBM's RS6000. We discuss a new parallel linear solver technology based on a fully parallel, scalable implementation of incomplete lower-upper (ILU) preconditioning coupled with a GMRES or Orthomin iteration process. This naturally ordered global ILU preconditioner is scalable to hundreds of processors, efficiently solving the matrix problems arising from large-scale simulations. The use of the techniques described in this paper has enabled us to run problem sizes of up to 16.5 million gridblocks.
Falcon was used to simulate fifty geostatistically derived realizations of a large, black oil waterflood system. The realizations, each with 2.3 million cells and 1,039 wells, took an average of 4.2 hours to execute on a 128-node CM5 computer, enabling the simulation study to finish in less than a month. In this field study, we bypassed upscaling through the use of fine vertical resolution gridding. Our focus has been on the applicability of Falcon to real-world problems. Falcon can be used for modeling both small and very large reservoirs, including reservoirs characterized by geostatistics. It can be used to simulate black oil, gas/water, and dry gas reservoirs. A fully compositional feature is being developed.
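The core of ILU(0), the kind of incomplete factorization the summary's parallel preconditioner builds on, is ordinary LU elimination with all fill-in outside the original sparsity pattern discarded. A serial pure-Python sketch on dense row lists (Falcon's parallel, naturally ordered implementation is of course far more involved):

```python
def ilu0(a):
    """ILU(0) sketch: LU factors restricted to the nonzero pattern of a.
    Structural zeros are skipped, so no fill-in is ever created."""
    n = len(a)
    lu = [row[:] for row in a]
    for k in range(n):
        for i in range(k + 1, n):
            if lu[i][k] != 0.0:
                lu[i][k] /= lu[k][k]
                for j in range(k + 1, n):
                    if lu[i][j] != 0.0:  # update only within the pattern
                        lu[i][j] -= lu[i][k] * lu[k][j]
    return lu

def lu_solve(lu, b):
    """Apply the preconditioner: forward solve (unit lower), then backward solve."""
    n = len(b)
    y = b[:]
    for i in range(n):
        for j in range(i):
            y[i] -= lu[i][j] * y[j]
    x = y
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            x[i] -= lu[i][j] * x[j]
        x[i] /= lu[i][i]
    return x
```

Inside a GMRES or Orthomin iteration, `lu_solve` plays the role of applying M⁻¹ to each residual; when the matrix has no zero entries (or no fill-in arises, as for a tridiagonal system), ILU(0) coincides with the exact LU factorization.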

