Surrogate Model Updating Using Clustering in a Genetic Algorithm Setup

Author(s):  
Tiefu Shao ◽  
Sundar Krishnamurty

This paper addresses the critical issue of fidelity in simulation-based design optimization using preference-based surrogate models. Specifically, it presents an integrated clustering-based updating procedure in a genetic algorithm setup to iteratively improve the efficacy of Kriging models. A potential drawback of preference-based surrogate models in simulation-based design is that the surrogates may misrepresent the true optima if the model-building scheme fails to capture the critical points of interest with enough fidelity or clarity. This work addresses this vulnerability with an efficient clustering-integrated surrogate model updating procedure that captures the buried, transient, yet inherent data patterns in the evolution of design candidates within a genetic algorithm setup, and screens out distinct optimal points for subsequent sequential model validation and updating. The results show that successfully finding the true optimal design through cost-effective surrogate-based optimization depends not only on the choice of sampling scheme, such as sample rate and distribution, in the initial surrogate model build-up, but also on an efficient and reliable updating procedure that can prevent suboptimal decisions.
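The cluster-then-validate loop described above can be sketched in a few lines. This is a minimal illustration only: the one-dimensional test function, the biased surrogate, and the hand-rolled k-means all stand in for the authors' actual Kriging model and genetic algorithm.

```python
# Hypothetical sketch: cluster the final GA population to screen out
# distinct candidate optima, then validate each against the expensive model.
import random

def true_f(x):                        # expensive simulation (stand-in)
    return (x - 0.3) ** 2

def surrogate_f(x):                   # cheap surrogate, slightly biased (stand-in)
    return (x - 0.35) ** 2

def kmeans_1d(points, k, iters=20):
    """Minimal 1-D k-means; returns the cluster centers."""
    centers = sorted(random.sample(points, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

random.seed(0)
# Final GA population, concentrated around two promising regions
population = ([random.gauss(0.35, 0.05) for _ in range(30)]
              + [random.gauss(0.8, 0.05) for _ in range(10)])
# Screen out distinct candidate optima via clustering ...
candidates = kmeans_1d(population, k=2)
# ... then flag candidates where the surrogate disagrees with the true model
update_points = [x for x in candidates
                 if abs(true_f(x) - surrogate_f(x)) > 1e-3]
```

In the paper's setting the flagged points would be evaluated with the expensive simulation and fed back into the Kriging model; here the threshold and functions are illustrative.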

Author(s):  
Tiefu Shao ◽  
Sundar Krishnamurty

This paper addresses the critical issues of effectiveness, efficiency, and reliability in simulation-based design optimization under surrogate model uncertainty. Specifically, it presents a novel method to build surrogate models iteratively with sufficient fidelity to capture global optimal design solutions accurately at minimal cost. The salient feature of the proposed method is that it concentrates the necessarily high fidelity on the potential global optimal regions of the surrogate model. The method is a synergistic integration of the multiple preference point method, which updates the surrogate model at the current local optimal points predicted with data-mining techniques in a genetic algorithm setup, and the maximum variance point method, which updates the surrogate model at the point with the maximum prediction variance. Comparison studies on thirty optimization scenarios derived from 15 test functions demonstrate a tangible improvement in reliability. The experimental results indicate that the proposed method is a reliable updating method in surrogate-model-based design optimization for efficiently locating single or multiple global optimal points, whether they lie at the corners of the design space, in its interior, or on its boundaries.
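The two complementary update rules above (exploit the predicted optimum, explore the most uncertain point) can be sketched as follows. A nearest-neighbour predictor and a distance-to-nearest-sample proxy stand in for a real Kriging mean and prediction variance, which the paper actually uses.

```python
# Hedged sketch of one iteration of the combined updating scheme.
import math

def true_f(x):                         # expensive model (stand-in)
    return math.sin(3 * x) + 0.5 * x

samples = [0.0, 0.5, 1.5, 2.0]         # current training set
values = [true_f(x) for x in samples]

grid = [i / 200 for i in range(401)]   # candidate points in [0, 2]

def predict(x):                        # crude surrogate: nearest neighbour
    i = min(range(len(samples)), key=lambda j: abs(x - samples[j]))
    return values[i]

def uncertainty(x):                    # proxy for Kriging prediction variance
    return min(abs(x - s) for s in samples)

pref_point = min(grid, key=predict)    # multiple-preference-point update
var_point = max(grid, key=uncertainty) # maximum-variance-point update
for x in (pref_point, var_point):      # evaluate both with the true model
    samples.append(x)
    values.append(true_f(x))
```

In the real method both rules operate on the same Kriging model each iteration, so the surrogate gains fidelity near predicted optima without neglecting unexplored regions.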


2008 ◽  
Vol 130 (4) ◽  
Author(s):  
Tiefu Shao ◽  
Sundar Krishnamurty

This paper addresses the critical issue of effectiveness and efficiency in simulation-based optimization using surrogate models as predictive models in engineering design. Specifically, it presents a novel clustering-based multilocation search (CMLS) procedure to iteratively improve the fidelity and efficacy of Kriging models in the context of design decisions. This approach overcomes a potential drawback in surrogate-model-based design optimization: surrogate models may yield suboptimal solutions by smoothing out the global optimal point if the sampling scheme fails to capture the critical points of interest with enough fidelity or clarity. The paper details how this problem of smoothing out the best (SOB) can persist in multimodal systems, even when a sequential model updating strategy is employed, and can lead to erroneous outcomes. To overcome the SOB defect, the CMLS method uses a clustering-based methodical procedure to screen out distinct potential optimal points for subsequent model validation and updating from a design decision perspective. It is embedded within a genetic algorithm setup to capture the buried, transient, yet inherent data patterns in the design evolution based on the principles of data mining, which are then used to improve the overall performance and effectiveness of surrogate-model-based design optimization. Four illustrative case studies, including a 21-bar truss problem, demonstrate the application of the CMLS methodology, and the results are discussed.


Author(s):  
Zequn Wang ◽  
Pingfeng Wang

This paper presents a maximum confidence enhancement based sequential sampling approach for simulation-based design under uncertainty. In the proposed approach, the ordinary Kriging method is adopted to construct surrogate models for all constraints, so that Monte Carlo simulation (MCS) can be used to estimate reliability and its sensitivity with respect to the design variables. A cumulative confidence level is defined to quantify the accuracy of the MCS-based reliability estimation built on the Kriging models. To improve efficiency, the sequential sampling scheme updates the Kriging models with the sample that produces the largest improvement of the cumulative confidence level. Moreover, a new design sensitivity estimation approach based on the constructed Kriging models estimates the reliability sensitivity with respect to the design variables without incurring any extra function evaluations. This yields smooth sensitivity values and thus greatly enhances the efficiency and robustness of the design optimization process. Two case studies demonstrate the proposed methodology.
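The confidence bookkeeping above can be illustrated with a toy calculation: each MCS sample's safe/failed classification gets a confidence Φ(|μ|/σ) from the surrogate's predictive mean and standard deviation, and the least confident sample is the natural candidate for the next model update. The (μ, σ) pairs below are invented for illustration, not the output of a fitted Kriging model.

```python
# Hedged sketch of the cumulative-confidence idea behind the sampling scheme.
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# (mu, sigma) predicted by the surrogate at four MCS samples of a constraint
predictions = [(2.0, 0.5), (-1.5, 0.4), (0.1, 0.6), (-0.05, 0.4)]

# Confidence that each sample's sign (safe vs. failed) is classified correctly
confidences = [norm_cdf(abs(mu) / sigma) for mu, sigma in predictions]

cumulative_confidence = 1.0            # confidence in the whole MCS estimate
for c in confidences:
    cumulative_confidence *= c

# Updating at the least confident sample gives the largest confidence gain
next_sample = confidences.index(min(confidences))
```

Samples far from the limit state (large |μ|/σ) contribute confidences near 1 and are never selected; samples straddling the limit state dominate the update choice.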


2018 ◽  
Vol 140 (7) ◽  
Author(s):  
Mohammad Kazem Sadoughi ◽  
Meng Li ◽  
Chao Hu ◽  
Cameron A. MacKenzie ◽  
Soobum Lee ◽  
...  

Reliability analysis involving high-dimensional, computationally expensive, highly nonlinear performance functions is a notoriously challenging problem in simulation-based design under uncertainty. In this paper, we tackle this problem by proposing a new method, high-dimensional reliability analysis (HDRA), in which a surrogate model is built to approximate a performance function that is high dimensional, computationally expensive, implicit, and unknown to the user. HDRA first employs the adaptive univariate dimension reduction (AUDR) method to construct a global surrogate model by adaptively tracking the important dimensions or regions. Then, the sequential exploration–exploitation with dynamic trade-off (SEEDT) method is utilized to locally refine the surrogate model by identifying additional sample points that are close to the critical region (i.e., the limit-state function (LSF)) with high prediction uncertainty. The HDRA method has three advantages: (i) alleviating the curse of dimensionality and adaptively detecting important dimensions; (ii) capturing the interactive effects among variables on the performance function; and (iii) flexibility in choosing the locations of sample points. The performance of the proposed method is tested through three mathematical examples and a real-world problem, the results of which suggest that the method can achieve an accurate and computationally efficient estimation of reliability even when the performance function exhibits high dimensionality, high nonlinearity, and strong interactions among variables.
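The classic univariate dimension reduction decomposition that AUDR builds on approximates a d-dimensional function by a sum of one-dimensional cuts through a reference point, f(x) ≈ Σᵢ f(μ₁,…,xᵢ,…,μ_d) − (d−1)·f(μ). The sketch below uses a made-up separable test function, for which the decomposition is exact; AUDR's adaptive tracking of important dimensions is not reproduced here.

```python
# Illustrative sketch of the univariate dimension reduction (UDR) decomposition.
def f(x):                       # performance function (stand-in, no interactions)
    return x[0] ** 2 + 2 * x[1] ** 2 + x[2]

mu = [1.0, 1.0, 1.0]            # reference point (e.g., the mean vector)
d = len(mu)

def udr_approx(x):
    """Sum of univariate cuts through mu, minus the (d-1)-fold counted f(mu)."""
    cuts = []
    for i in range(d):
        xi = list(mu)
        xi[i] = x[i]            # vary only dimension i
        cuts.append(f(xi))
    return sum(cuts) - (d - 1) * f(mu)

x = [0.3, -0.7, 2.0]
approx = udr_approx(x)          # exact here, since f has no interaction terms
```

For functions with variable interactions the decomposition is only approximate, which is why the paper pairs it with the SEEDT local refinement step.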


Author(s):  
Natesh Chandrashekar ◽  
Sundar Krishnamurty

This paper deals with the development of simulation-based design models under uncertainty, and presents an approach for building surrogate models and validating them for their efficacy and relevance from a design decision perspective. Specifically, this work addresses the fundamental research issue of how to build surrogate models that are computationally efficient, sufficiently accurate, and meaningful from the viewpoint of their subsequent use in design. Towards this goal, this work presents a Bayesian analysis based iterative model building and validation process leading to reliable and accurate surrogate models, which can then be invoked in the final design optimization phase. The resulting surrogate models can be expected to act as abstractions or idealizations of the engineering analysis models and to mimic system performance in a computationally efficient manner to facilitate design decisions under uncertainty. This is accomplished by first building initial models, and then refining and validating them over many stages, in line with the iterative nature of the engineering design process. Salient features of this work include the introduction of a novel preference-based design screening strategy nested in an optimally selected prior information set for validation purposes, and the use of a Bayesian evaluation based model-updating technique to capture new information and enhance the model's value and effectiveness. A case study of the design of a windshield wiper arm demonstrates the overall methodology, and the results are discussed.


Author(s):  
Tiefu Shao ◽  
Sundar Krishnamurty

Variations associated with stenting systems, artery properties, and physician skill necessitate a better understanding of coronary artery stents so as to facilitate the design of stents customized to individual patients. This paper presents the development of an integrated computer simulation-based design approach using engineering finite element analysis (FEA) models to capture stent knowledge, utility theory-based decision models to represent design preferences, and statistics-based surrogate models to improve process efficiency. The paper has two focuses: (1) understanding the significance of engineering analysis and surrogate models in the simulation-based design of medical devices; and (2) investigating the modeling implications in the context of stent design. The study reveals that advanced nonlinear FEA software, with analysis capabilities for large deformation and contact interaction, offers a platform for executing high-fidelity simulations, yet the selection of appropriate analysis models is still subject to the tradeoff between cost of analysis and accuracy of solution, and the cost-prohibitive simulations necessitate the use of surrogate models in the subsequent multi-objective design optimization. A detailed comparison between regression models and Kriging models suggests the importance of sampling schemes in successfully implementing Kriging methods.


Author(s):  
Karim Hamza ◽  
Kazuhiro Saitou

This paper presents a new method for designing vehicle structures for crashworthiness using surrogate models and a genetic algorithm. Inspired by classifier ensemble approaches in pattern recognition, the method estimates the crash performance of a candidate design based on an ensemble of surrogate models constructed from different sets of samples of finite element analyses. Multiple sub-populations of candidate designs are evolved, in a co-evolutionary fashion, to minimize different aggregates of the outputs of the surrogate models in the ensemble, as well as the raw output of each surrogate. With the same sample size of finite element analyses, the method is expected to provide wider ranges of potentially high-performance designs than conventional methods that employ a single surrogate model, by effectively compensating for the errors associated with individual surrogate models. Two case studies on simplified and full vehicle models subject to full-overlap frontal crash conditions are presented for demonstration.
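The scoring idea can be sketched as follows: each sub-population optimizes a different aggregate of the ensemble outputs, or the raw output of one member. The three surrogates below are arbitrary quadratic stand-ins, not fitted crash models.

```python
# Minimal sketch of evaluating a candidate design against a surrogate ensemble.
surrogates = [
    lambda x: (x - 1.0) ** 2,           # surrogate fitted on FE sample set 1
    lambda x: (x - 1.2) ** 2 + 0.1,     # surrogate fitted on FE sample set 2
    lambda x: (x - 0.9) ** 2 - 0.05,    # surrogate fitted on FE sample set 3
]

def ensemble_scores(x):
    """Objectives that different sub-populations could minimize."""
    outs = [s(x) for s in surrogates]
    return {
        "mean": sum(outs) / len(outs),  # consensus aggregate
        "worst": max(outs),             # conservative aggregate
        "raw": outs,                    # one raw objective per surrogate
    }

scores = ensemble_scores(1.1)
```

Because each surrogate errs differently, designs that look good under several aggregates are less likely to be artifacts of a single model's error, which is the compensation effect the abstract describes.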


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Nikolaos Papanikolaou ◽  
Konstantinos Anyfantis

Purpose: Experimental mid/large-scale testing of ship-like stiffened panels in compression is a quite expensive exercise that is not standard, so numerical simulations are preferred instead. Because simulations are relatively inexpensive in cost and time, most authors perform an exhaustive design-space exploration, arriving at a significant number of runs. This work demonstrates that the buckling response with respect to the nondimensional slenderness ratios may be fitted well with nine runs per stiffener geometry.

Design/methodology/approach: Buckling strength formulas for stiffened panels are derived efficiently through design of experiments (DoE) and response surface methodology (RSM), combined with numerical nonlinear experimentation over the entire range of practical geometries.

Findings: The surrogate model developed for T-bar stiffeners predicts the ultimate stress accurately enough in the practical design area, while the surrogate models for angle bars and flat bars show differences of 10 to 30% from the common structural rules (CSR).

Originality/value: To the authors' best knowledge, this is the first time the statistics-based, formal, and rigorous DoE/RSM approach has been applied to obtain buckling surfaces for stiffened panels. The number of required observations per stiffener type has not been addressed before, as each work selects its own sampling scheme without formal reasoning. This work frames the number of observations needed for efficient surrogate model building.
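The nine runs per stiffener geometry correspond to a 3×3 full factorial design in the two nondimensional slenderness ratios, which is just enough to fit a full quadratic response surface (six coefficients) with a few degrees of freedom to spare. The level values below are assumptions for illustration, not the paper's actual design points.

```python
# Sketch of the nine-run factorial design implied by the abstract.
from itertools import product

beta_levels = [1.0, 2.0, 3.0]   # plate slenderness ratio levels (assumed)
lam_levels = [0.2, 0.6, 1.0]    # column slenderness ratio levels (assumed)

# Nine nonlinear FE runs per stiffener type: every (beta, lambda) combination
design = list(product(beta_levels, lam_levels))

# A full quadratic surface needs 6 coefficients: 1, b, l, b^2, l^2, b*l,
# so 9 observations leave 3 degrees of freedom for lack-of-fit checking.
n_coeffs = 6
```

The fitted quadratic would then serve as the buckling strength surrogate over the practical range of each slenderness ratio.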


2005 ◽  
Vol 49 (03) ◽  
pp. 159-175
Author(s):  
Daniele Peri ◽  
Emilio F. Campana

This work presents a simulation-based design environment for the solution of optimum ship design problems based on a global optimization (GO) algorithm that prevents the optimizer from being trapped in local minima. The procedure, illustrated in the framework of multiobjective optimization problems, makes use of high-fidelity, CPU-time-expensive computational models, including a free-surface-capturing Reynolds-averaged Navier-Stokes equation (RANSE) solver. The optimization process is composed of a global and a local phase. In the global stage of the search, a few computationally expensive simulations are needed for creating analytical approximations (i.e., surrogate models) of the objective functions. Tentative designs, created to explore the design space, are then evaluated with these inexpensive approximations. The more promising designs are then clustered and locally minimized, and eventually verified with high-fidelity simulations. New exact values are used to improve the surrogate models, and repeated cycles of the algorithm are performed. A decision-maker strategy is finally adopted to select the more interesting solution, and a final local refinement stage is performed by a gradient-based local optimization technique. A key point in the algorithm is the introduction of the surrogate models for the reduction of the overall time needed for the objective function evaluations, and their dynamic evolution and refinement along the optimization process. Moreover, an attractive alternative to adjoint formulations, the approximation management framework (AMF), based on a combined strategy that joins variable-fidelity models and trust region techniques, is tested. Numerical examples are given demonstrating both the validity and usefulness of the proposed approach.


Author(s):  
Sandeep Kumar Bothra ◽  
Sunita Singhal ◽  
Hemlata Goyal

Resource scheduling in a cloud computing environment is noteworthy for scientific workflow execution under a cost-effective deadline constraint. Although various researchers have proposed meta-heuristic and heuristic approaches to this critical issue, none meets strict deadline conditions while keeping the load balanced among machines. This article proposes an improved genetic algorithm that initializes the population with a greedy strategy: instead of assigning tasks to machines at random, each task is assigned to an underloaded virtual machine. In general workflow scheduling, task dependency is tested after each crossover and mutation operator of the genetic algorithm, but here the authors test it only after the mutation operation, which yields better results. The proposed model also considers the booting time and performance variation of virtual machines. Compared with previously developed heuristics and metaheuristics, the algorithm increases the hit rate and load balance while reducing execution time and cost.
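The greedy seeding step can be sketched in a few lines: each task goes to the currently least-loaded VM, giving the genetic algorithm a load-balanced initial individual rather than a random one. Task runtimes and the VM count below are illustrative, and real workflow scheduling would also respect task dependencies and the VM booting times the article mentions.

```python
# Hedged sketch of greedy population seeding for workflow scheduling.
task_runtimes = [4.0, 3.0, 2.0, 2.0, 1.0]   # estimated task costs (assumed)
n_vms = 2

loads = [0.0] * n_vms
assignment = []                              # one gene per task: its VM index
for t in task_runtimes:
    vm = loads.index(min(loads))             # pick the least-loaded VM
    loads[vm] += t
    assignment.append(vm)
```

The resulting `assignment` list is one chromosome of the initial GA population; crossover and mutation then search around this already balanced starting point.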

