Hierarchical System Decomposition Using Genetic Algorithm for Future Sustainable Computing

2020 ◽  
Vol 12 (6) ◽  
pp. 2177
Author(s):  
Jun-Ho Huh ◽  
Jimin Hwa ◽  
Yeong-Seok Seo

A Hierarchical Subsystem Decomposition (HSD) is of great help in understanding large-scale software systems at the software architecture level. However, due to the lack of software architecture management, HSD documentation is often outdated or disappears in the course of repeated changes to a software system. Thus, in this paper, we propose a new approach for recovering an HSD according to the intended design criteria, based on a genetic algorithm that searches for an optimal solution. Experiments are performed to evaluate the proposed approach on two open-source software systems with 14 fitness functions for the genetic algorithm (GA). The HSDs recovered by our approach have different structural characteristics depending on the objective. In our analysis of the GA operators, crossover contributes a relatively large improvement in the early phase of a search, while mutation yields small improvements throughout the search. Our GA is compared with a hill-climbing algorithm (HC) implemented with our GA operators. Although still at a primitive stage, our GA produces higher-quality HSDs than HC. The experimental results indicate that the proposed approach delivers better performance than the existing approach.
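The crossover/mutation dynamics and the GA-versus-hill-climbing comparison can be sketched on a toy objective. This is a minimal illustration, not the authors' implementation: the bit-count fitness merely stands in for the paper's 14 structural fitness functions, and both searches share the same mutation operator, mirroring the setup of implementing HC with the GA's operators.

```python
import random

random.seed(0)

# Toy stand-in fitness: number of 1-bits. The paper's fitness functions
# are structural metrics; bit-count is just an illustrative target.
def fitness(bits):
    return sum(bits)

def crossover(a, b):
    p = random.randrange(1, len(a))            # single-point crossover
    return a[:p] + b[p:]

def mutate(bits, rate=0.02):
    return [1 - b if random.random() < rate else b for b in bits]

def ga(n=40, length=64, gens=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(n)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: n // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < n:
            a, b = random.sample(elite, 2)
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return max(pop, key=fitness)

def hill_climb(length=64, steps=4000):
    cur = [random.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        cand = mutate(cur, rate=1.0 / length)  # reuse the GA's mutation operator
        if fitness(cand) >= fitness(cur):
            cur = cand
    return cur
```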

2010 ◽  
Vol 44-47 ◽  
pp. 743-747 ◽  
Author(s):  
Qun Ming Li ◽  
Qing Hua Qin ◽  
Shi Wei Zhang ◽  
Hua Deng

This paper analyzes three typical mechanisms of heavy forging robot grippers: sliding-block pulling grippers (in short- and long-leveraged variants) and leveraged pushing grippers, and uses a multi-objective evolutionary genetic algorithm to design optimal forging robot grippers. The decision variables are defined from the geometrical dimensions of the heavy grippers, four objective functions are defined from the gripping forces and the force-transmission relationships between the joints, and the constraints follow from the physical conditions and the structure of the grippers. The Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) is used to solve the optimization problem, and normalized weighted objective functions are used to select the best solution from the Pareto-optimal fronts. The Pareto fronts and optimal results are compared and analyzed, and an optimal forging robot gripper model is designed. The results show the effectiveness of the optimal design. Based on similarity theory, optimal dimensions can be scaled from small forging grippers to large ones, and model-to-prototype experiments to test the physical behavior become possible.
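The final selection step, picking a single compromise solution from the Pareto front via normalized weighted objectives, can be sketched as follows. Both objectives are minimized here, and the weights and front values are illustrative, not the paper's.

```python
# Normalize each objective to [0, 1] over the front, then take the point
# with the smallest weighted sum.
def select_from_front(front, weights):
    n = len(front[0])
    lo = [min(p[i] for p in front) for i in range(n)]
    hi = [max(p[i] for p in front) for i in range(n)]
    def score(p):
        total = 0.0
        for i in range(n):
            span = hi[i] - lo[i]
            total += weights[i] * (0.0 if span == 0 else (p[i] - lo[i]) / span)
        return total
    return min(front, key=score)

front = [(1.0, 9.0), (3.0, 4.0), (8.0, 1.0)]          # non-dominated points
best = select_from_front(front, weights=(0.5, 0.5))   # -> (3.0, 4.0), the balanced point
```

With equal weights the two extreme points each score 0.5, so the balanced interior point wins.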


Author(s):  
Bernard K.S. Cheung

Genetic algorithms have been applied to solve various types of large-scale, NP-hard optimization problems. Many researchers have investigated their global convergence properties using Schema Theory, Markov chains, etc. A more realistic approach, however, is to estimate the probability of success in finding the global optimal solution within a prescribed number of generations under certain function landscapes. Further investigation reveals that the inherent weaknesses affecting performance can be remedied, while efficiency can be significantly enhanced through an adaptive scheme that integrates the crossover, mutation and selection operations. The advance of information technology and extensive corporate globalization create great challenges for the solution of modern supply chain models, which are becoming increasingly complex and formidable in size. Meta-heuristic methods must be employed to obtain near-optimal solutions. Recently, a genetic algorithm has been reported to solve these problems satisfactorily, and there are good reasons for this.
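One way such an adaptive scheme can couple the operators is to tie the mutation rate to population diversity; the threshold and rates below are illustrative assumptions, not taken from the chapter.

```python
# When fitness variance collapses (premature convergence), boost mutation
# to restore exploration; otherwise keep the base rate.
def adaptive_mutation_rate(fitnesses, base=0.01, boosted=0.10, eps=1e-3):
    mean = sum(fitnesses) / len(fitnesses)
    var = sum((f - mean) ** 2 for f in fitnesses) / len(fitnesses)
    return boosted if var < eps else base
```

A fully converged population such as `[5.0, 5.0, 5.0]` gets the boosted rate, while a diverse one keeps the base rate.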


2012 ◽  
pp. 201-222
Author(s):  
Yujian Fu ◽  
Zhijang Dong ◽  
Xudong He

The approach aims at solving the above problems by including analysis and verification at two different levels of the software development process, the design level and the implementation level, bridging the gap between software architecture analysis and verification and the software product. At the architecture design level, to ensure design correctness and cope with large-scale complex systems, compositional verification is used: each component is verified individually and the results are synthesized based on the driving theory. For those properties that cannot be verified at the design level, the design model is translated to an implementation and a runtime verification technique is applied to the program. This approach greatly reduces the work of design verification and avoids the state-explosion problem of model checking. Moreover, it ensures both design and implementation correctness, and can thus provide a high-confidence final software product. The approach is based on the Software Architecture Model (SAM), proposed by Florida International University in 1999. SAM is a formal specification framework built on component-connector pairs with two formalisms: Petri nets and temporal logic. The ACV approach places strong demands on an organization to articulate the quality attributes of primary importance, and it requires a selection of benchmark combination points with which to verify integrated properties. The purpose of ACV is not to commend particular architectures, but to provide a method for the verification and analysis of large-scale software systems at the architecture level. Future research falls in two directions. In the compositional verification of a SAM model, circular waiting on certain data may arise among different components and connectors; this problem is not discussed in the current work. The translation of SAM to an implementation is based on restricted Petri nets, owing to the undecidability of high-level Petri nets. In the runtime analysis of the implementation, extraction of the program's execution trace is still needed to obtain a white-box view, and further analysis of the execution can provide more information about the correctness of the product.
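The runtime-verification step, checking a property over an extracted execution trace, can be sketched with a simple monitor. The event names and the matching-style property below are hypothetical, standing in for the temporal-logic properties checked at runtime.

```python
# Check over a recorded execution trace that every trigger event is
# eventually matched by a response event (a simple response property).
def check_response(trace, trigger, response):
    pending = 0
    for event in trace:
        if event == trigger:
            pending += 1
        elif event == response and pending > 0:
            pending -= 1
    return pending == 0

ok = check_response(["acquire", "work", "release"], "acquire", "release")   # holds
bad = check_response(["acquire", "work"], "acquire", "release")             # violated
```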


2015 ◽  
Vol 713-715 ◽  
pp. 1579-1582
Author(s):  
Shao Min Zhang ◽  
Ze Wu ◽  
Bao Yi Wang

Against the background of huge amounts of data in a large-scale power grid, active power optimization easily falls into local optima, while the calculation also demands high processing speed. To address these issues, the farmer-fishing algorithm, applied here to the optimal distribution of active load among coal-fired power units, is used to improve the cloud adaptive genetic algorithm (CAGA) and speed up its convergence phase. The concept of cloud computing is introduced, and a parallel design is implemented with the MapReduce framework. This method speeds up the calculation and improves the effectiveness of the active load allocation optimization.
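The MapReduce decomposition of the cost evaluation can be sketched in plain Python: the map step scores each unit independently (the parallelizable part), and the reduce step aggregates. This is a stand-in for the authors' cloud implementation, and the quadratic coal-cost curve is an illustrative assumption.

```python
# Map: evaluate each unit's generation cost independently (parallelizable).
def map_phase(allocation, cost_fn):
    return [(unit, cost_fn(load)) for unit, load in allocation.items()]

# Reduce: aggregate the per-unit costs into the total to be minimized.
def reduce_phase(pairs):
    return sum(cost for _, cost in pairs)

coal_cost = lambda p: 0.01 * p * p + 2.0 * p    # illustrative cost of output p (MW)
alloc = {"unit1": 120.0, "unit2": 80.0}         # one candidate load allocation
total = reduce_phase(map_phase(alloc, coal_cost))   # -> 608.0
```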


2014 ◽  
Vol 12 (03) ◽  
pp. 1430002 ◽  
Author(s):  
Eliahu Cohen ◽  
Boaz Tamir

In May 2011, D-Wave Systems Inc. announced "D-Wave One" as "the world's first commercially available quantum computer". No wonder this adiabatic quantum computer, based on a 128-qubit chipset, provoked an immediate controversy. Over the last 40 years, quantum computation has been a very promising yet challenging research area, facing major difficulties in producing a large-scale quantum computer. Today, after Google has purchased the 512-qubit "D-Wave Two", criticism has only increased. In this work, we examine the theory underlying the D-Wave, seeking to shed some light on this intriguing quantum computer. Starting from classical algorithms such as the Metropolis algorithm, the genetic algorithm (GA), hill climbing and simulated annealing, we continue to adiabatic computation and quantum annealing toward a better understanding of the D-Wave mechanism. Finally, we outline some applications within the fields of information and image processing. In addition, we suggest a few related theoretical ideas and hypotheses.
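The step from the Metropolis algorithm to simulated annealing can be sketched as follows; the one-dimensional objective and geometric cooling schedule are illustrative choices, not tied to the D-Wave hardware.

```python
import math
import random

random.seed(1)

# Metropolis acceptance rule, the classical ancestor of simulated (and
# quantum) annealing: always accept downhill moves, accept uphill moves
# with probability exp(-dE / T).
def metropolis_accept(dE, T):
    return dE <= 0 or random.random() < math.exp(-dE / T)

# Simulated annealing = Metropolis sampling under a falling temperature.
def anneal(f, x0, step=0.5, T0=1.0, cooling=0.95, iters=500):
    x, T = x0, T0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if metropolis_accept(f(cand) - f(x), T):
            x = cand
        T *= cooling                     # geometric cooling schedule
    return x

x = anneal(lambda v: (v - 3.0) ** 2, x0=0.0)   # objective's minimum is at v = 3
```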


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
I. Hameem Shanavas ◽  
R. K. Gnanamurthy

In the optimization of VLSI physical design, area minimization and interconnect-length minimization are important objectives in the physical design automation of very-large-scale integration chips; minimizing area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for the physical design stages of partitioning, floorplanning, placement, and routing. This work optimizes benchmark circuits across these stages using a hierarchical approach based on evolutionary algorithms. The goals of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing also influence other criteria such as power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycle, is applied in each phase to minimize area and interconnect length. The approach hierarchically combines a genetic algorithm and simulated annealing, and can quickly produce optimal solutions for the popular benchmarks.
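The hybrid idea, a GA generation followed by a local-search polish on each offspring, can be sketched on a toy bitstring objective that stands in for the area/wirelength costs. This is an illustrative memetic skeleton, not the paper's implementation.

```python
import random

random.seed(2)

def fitness(bits):
    return sum(bits)                           # toy stand-in objective

# Short greedy bit-flip polish applied to each child: the "local search
# step within the evolutionary cycle".
def local_search(bits, tries=10):
    best = list(bits)
    for _ in range(tries):
        i = random.randrange(len(best))
        cand = list(best)
        cand[i] ^= 1                           # flip one bit
        if fitness(cand) > fitness(best):
            best = cand
    return best

# One memetic generation: keep the top half, refill with locally
# polished crossover children.
def memetic_step(pop):
    pop = sorted(pop, key=fitness, reverse=True)
    parents = pop[: len(pop) // 2]
    children = []
    while len(parents) + len(children) < len(pop):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        children.append(local_search(a[:cut] + b[cut:]))
    return parents + children
```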


Author(s):  
P. K. KAPUR ◽  
ANU. G. AGGARWAL ◽  
KANICA KAPOOR ◽  
GURJEET KAUR

The demand for complex and large-scale software systems is increasing rapidly. Therefore, the development of high-quality, reliable and low-cost computer software has become a critical issue in the enormous worldwide computer technology market. To develop such large and complex software, small and independent modules are integrated, each tested independently during the module testing phase of software development. In the process, testing resources such as time and testing personnel are consumed, and these resources are not infinitely large. Consequently, it is important for the project manager to allocate these limited resources optimally among the modules during the testing process. Another major concern in software development is cost: it is, in fact, a profit to management if the cost of the software is low while the customer requirements are still met. In this paper, we investigate an optimal resource allocation problem of minimizing the cost of software testing under a limited amount of available resources, given a reliability constraint. To solve the optimization problem we present a genetic algorithm, which stands as a powerful tool for solving search and optimization problems. A key advantage of using a genetic algorithm in the field of software reliability is its capability to give optimal results by learning from historical data. A numerical example is discussed to illustrate the applicability of the approach.
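A common way to hand such a constrained problem to a GA is a penalized fitness. The per-module reliability model, the unit cost, and the penalty weight below are all illustrative assumptions, not the paper's formulation.

```python
import math

# Penalized fitness for "minimize testing cost subject to a reliability
# floor": infeasible allocations pay a penalty proportional to the shortfall.
def penalized_cost(effort, unit_cost=1.0, r_min=0.9, penalty=1000.0):
    cost = unit_cost * sum(effort)
    reliability = 1.0
    for x in effort:
        reliability *= 1.0 - math.exp(-x)     # toy per-module model, modules in series
    if reliability < r_min:
        cost += penalty * (r_min - reliability)   # infeasibility penalty
    return cost

feasible = penalized_cost([4.0, 4.0, 4.0])      # meets the reliability floor
infeasible = penalized_cost([1.0, 1.0, 1.0])    # heavily penalized
```

A GA minimizing `penalized_cost` is steered toward cheap allocations that still satisfy the reliability constraint.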


Forests ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 26
Author(s):  
Yutu Yang ◽  
Zilong Zhuang ◽  
Yabin Yu

Defects on a solid wood board have a great influence on the aesthetics and mechanical properties of the board. After the defects are removed, the board is no longer of standard size; the manual line-drawing and cutting procedure is time-consuming and laborious, and does not necessarily yield an optimal solution. Intelligent cutting of the board can be realized using a genetic algorithm. However, the global optimal solution for the whole machining process cannot be obtained by considering the sawing and splicing of raw materials separately; considering board cutting and board splicing together improves the utilization rate of the solid wood board. With isolated consideration of raw-material sawing into standardized wood-piece dimensions followed by board splicing, the effective utilization rate of the board is 79.1%, while shortcut splicing optimization with non-standardized dimensions for the final board reaches a utilization rate of 88.6%, an improvement of 9.5 percentage points. In large-scale planning, shortcut splicing optimization likewise increased the utilization rate by 12.14%. This has certain guiding significance for actual production.
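One common way a GA encodes a cutting plan is as an ordering of required pieces, decoded by first-fit packing into boards; the GA then maximizes the resulting utilization. This is a sketch of that general idea rather than the authors' method, and the dimensions (in mm) are illustrative.

```python
# Decode a chromosome (an ordering of required piece lengths) into boards
# by first-fit packing and return the material utilization.
def pack(order, board_len=2400.0):
    boards = []                                # remaining length per opened board
    for piece in order:
        for i, rem in enumerate(boards):
            if rem >= piece:
                boards[i] = rem - piece
                break
        else:
            boards.append(board_len - piece)   # open a new board
    return sum(order) / (len(boards) * board_len)   # utilization in [0, 1]

util = pack([1200.0, 1200.0, 800.0, 800.0, 800.0])  # fills 2 boards exactly -> 1.0
```

A GA would search over permutations of the piece list, since the packing result (and hence utilization) depends on the order.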


2012 ◽  
Vol 151 ◽  
pp. 139-144
Author(s):  
Jian Xi Yang ◽  
Li Wen Zhang

This paper uses a dual-structure coded genetic algorithm to optimize sensor placement. The method, which combines an optimal-preservation (elitist) strategy with adaptive crossover, overcomes the difficulties computers face with lengthy large-scale structural data and storage space, and safeguards the search for the optimal solution. Finally, analysis of a continuous rigid-frame bridge project shows that the method is superior to the effective independence method in search capability, computational efficiency and reliability, although its convergence speed still needs further improvement.
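The optimal-preservation (elitist) strategy can be sketched as follows: the best individual is copied unchanged into the next generation, so the search can never lose the current optimum. The sensor-placement fitness itself is problem-specific, so fitness and breeding are left abstract here.

```python
# One elitist generation: the best individual survives unchanged, the rest
# of the population is produced by an arbitrary breeding function.
def next_generation(pop, fitness, breed):
    elite = max(pop, key=fitness)              # survives unchanged
    offspring = [breed(pop) for _ in range(len(pop) - 1)]
    return [list(elite)] + offspring
```

Because the elite is always carried forward, the best fitness in the population is monotonically non-decreasing across generations.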


2011 ◽  
Vol 48-49 ◽  
pp. 25-28
Author(s):  
Wei Jian Ren ◽  
Yuan Jun Qi ◽  
Wei Lv ◽  
Cheng Da Li

To address the tendency to fall into local optima when solving large-scale optimization problems, and the poor convergence of the Immune Genetic Algorithm, a new probability selection method based on concentration is presented for the genetic operations. Considering the features of chaos optimization, such as not requiring the problem to be continuous or differentiable (unlike conventional methods) and traversing a certain range during the solving process in order to find the global optimal solution, a Chaos Immune Genetic Algorithm based on the Logistic map and the Hénon map is proposed. Applied to the traveling salesman problem (TSP), the results show its superiority over other algorithms.
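Chaotic initialization with the Logistic map is one common way such chaos-based range traversal is realized; the sketch below uses an arbitrary seed value and omits the Hénon-map variant.

```python
# The Logistic map x' = 4x(1 - x) generates a chaotic sequence in (0, 1),
# usable in place of a plain PRNG to seed candidate solutions.
def logistic_sequence(x0=0.31, n=10):
    seq, x = [], x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        seq.append(x)
    return seq

keys = logistic_sequence()   # ten chaotic values in (0, 1)
```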

