Optimization on Metamodeling-Supported Iterative Decomposition

2015 ◽  
Vol 138 (2) ◽  
Author(s):  
Kambiz Haji Hajikolaei ◽  
George H. Cheng ◽  
G. Gary Wang

The recently developed metamodel-based decomposition strategy relies on quantifying the variable correlations of black-box functions so that high-dimensional problems are decomposed into smaller subproblems before optimization is performed. Such a two-step method may miss the global optimum due to its rigidity, or may require extra expensive sample points to ensure adequate decomposition. This work develops a strategy to iteratively decompose high-dimensional problems within the optimization process. The sample points used during the optimization are reused to build a metamodel called principal component analysis-high dimensional model representation (PCA-HDMR) for quantifying the intensities of variable correlations through sensitivity analysis. At every iteration, the predicted intensities of the correlations are updated based on all the evaluated points, and a new decomposition scheme is suggested by omitting the weak correlations. Optimization is then performed on the iteratively updated subproblems. The proposed strategy is applied to the optimization of different benchmark and engineering problems, and the results are compared to direct optimization of the undecomposed problems using the trust region mode pursuing sampling method (TRMPS), a genetic algorithm (GA), the cooperative coevolutionary algorithm with correlation-based adaptive variable partitioning (CCEA-AVP), and dividing rectangles (DIRECT). The results show that, except for the category of undecomposable problems in which all or many correlations are strong (i.e., important), the proposed strategy effectively improves the accuracy of the optimization results. The advantages of the new strategy in comparison with previous methods are also discussed.
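The decompose-then-optimize loop described above can be sketched in outline. Everything here is illustrative: the paper quantifies correlation intensities with PCA-HDMR sensitivity analysis, whereas this sketch substitutes a cheap mixed finite difference as the coupling proxy, and the names `interaction_strength` and `decompose`, the base point, and the threshold are all hypothetical choices, not the authors' method.

```python
import numpy as np

def interaction_strength(f, x0, i, j, h=0.5):
    # Mixed finite difference as a cheap proxy for the i-j coupling term;
    # it vanishes exactly when f is additively separable in x_i and x_j.
    ei = np.zeros_like(x0); ei[i] = h
    ej = np.zeros_like(x0); ej[j] = h
    return abs(f(x0 + ei + ej) - f(x0 + ei) - f(x0 + ej) + f(x0))

def decompose(f, x0, n, threshold=1e-6):
    # Group variables connected by strong pairwise correlations
    # (union-find over the "strong interaction" graph); weak
    # correlations are omitted, yielding independent subproblems.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if interaction_strength(f, x0, i, j) > threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Toy black box: x0 and x1 interact through the x0*x1 term; x2 is separable
f = lambda x: (x[0] - 1)**2 + x[0] * x[1] + (x[1] + 2)**2 + (x[2] - 3)**2
print(decompose(f, np.zeros(3), 3))   # → [[0, 1], [2]]
```

In the actual strategy this grouping is refreshed at every iteration from all evaluated points, so the decomposition adapts as the optimization proceeds rather than being fixed up front.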


Energies ◽  
2020 ◽  
Vol 13 (14) ◽  
pp. 3520 ◽  
Author(s):  
Hang Li ◽  
Zhe Zhang ◽  
Xianggen Yin

Because the penetration level of renewable energy sources has increased rapidly in recent years, uncertainty in power system operation is gradually increasing. As an efficient tool for power system analysis under uncertainty, probabilistic power flow (PPF) is becoming increasingly important. The point-estimate method (PEM) is a well-known PPF algorithm. However, two significant defects limit its practical use. One is that the PEM struggles to estimate high-order moments accurately, which makes it difficult for the PEM to describe the distribution of non-Gaussian output random variables (ORVs). The other is that the calculation burden is strongly related to the scale of the input random variables (IRVs), which makes the PEM difficult to apply to large-scale power systems. A novel approach based on principal component analysis (PCA) and high-dimensional model representation (HDMR) is proposed here to overcome these defects. PCA is applied to reduce the dimension of the IRVs and eliminate their correlations; HDMR is applied to estimate the moments of the ORVs. Because HDMR considers the cooperative effects of the IRVs, it yields a significantly smaller estimation error, for high-order moments in particular. Case studies show that the proposed method achieves better performance in terms of accuracy and efficiency than the traditional PEM.
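The PCA preprocessing step described above can be sketched as follows. This is only the dimension-reduction and decorrelation stage, on synthetic correlated Gaussian inputs; the HDMR moment-estimation stage and the paper's power-system models are not reproduced, and the covariance matrix and 95% variance cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic correlated input random variables (IRVs)
cov = np.array([[1.0, 0.8, 0.1],
                [0.8, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
samples = rng.multivariate_normal(np.zeros(3), cov, size=50_000)

# PCA via eigendecomposition of the sample covariance matrix
C = np.cov(samples, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]              # sort by decreasing variance
eigval, eigvec = eigval[order], eigvec[:, order]

# Keep the leading components carrying (say) 95% of total variance
cumfrac = np.cumsum(eigval) / eigval.sum()
keep = int(np.searchsorted(cumfrac, 0.95)) + 1

# The transformed IRVs (principal components) are mutually uncorrelated,
# so downstream moment estimation can treat them independently
z = samples @ eigvec[:, :keep]
print(keep, np.round(np.corrcoef(z, rowvar=False), 2))
```

For strongly correlated inputs (e.g. neighboring wind farms), `keep` is typically much smaller than the original dimension, which is where the computational saving over the plain PEM comes from.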


2013 ◽  
Vol 136 (1) ◽  
Author(s):  
Kambiz Haji Hajikolaei ◽  
G. Gary Wang

In engineering design, spending an excessive amount of time on physical experiments or expensive simulations makes the design process costly and lengthy. This issue is exacerbated when the design problem has a large number of inputs, i.e., is of high dimension. High dimensional model representation (HDMR) is one powerful method for approximating high-dimensional, expensive, black-box (HEB) problems. One existing HDMR implementation, random sampling HDMR (RS-HDMR), can build an HDMR model from random sample points using a linear combination of basis functions. The most critical issue in RS-HDMR is that calculating the coefficients of the basis functions involves integrals that are approximated by Monte Carlo summations, which are error prone with limited samples, especially with nonuniform sampling. In this paper, a new approach based on principal component analysis (PCA), called PCA-HDMR, is proposed for finding the coefficients that provide the best linear combination of the bases with minimum error, without using any integral. Several benchmark problems of different dimensionalities and one engineering problem are modeled using the method, and the results are compared with RS-HDMR results. In all problems, with both uniform and nonuniform sampling, PCA-HDMR built more accurate models than RS-HDMR for a given set of sample points.
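The integral-free idea can be illustrated with a small sketch. This is not the paper's exact PCA-HDMR formulation: here a total-least-squares fit (a PCA-style construction, taking the right singular vector of the smallest singular value of the augmented data matrix) recovers the minimum-error linear combination of basis functions directly from sample points, with no Monte Carlo integration; the 1-D Legendre basis and the toy target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)          # random samples (nonuniform also works)
y = 3 * x**2 + 2 * x - 1             # "black-box" responses at the samples

# Basis functions: Legendre polynomials P0, P1, P2 on [-1, 1];
# the target is exactly 0*P0 + 2*P1 + 2*P2
Phi = np.column_stack([np.ones_like(x), x, (3 * x**2 - 1) / 2])

# PCA-style fit: SVD of the augmented matrix [Phi | y]; the right singular
# vector for the smallest singular value spans the minimum-error relation
# between bases and responses (total least squares) — no integral involved
_, _, Vt = np.linalg.svd(np.column_stack([Phi, y]))
v = Vt[-1]
coeffs = -v[:-1] / v[-1]
print(np.round(coeffs, 6))           # recovers [0, 2, 2]
```

By contrast, RS-HDMR would estimate each coefficient as a Monte Carlo average of the response against a basis function, which converges slowly for small or nonuniform sample sets — precisely the defect the abstract describes.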


Author(s):  
Kambiz Haji Hajikolaei ◽  
G. Gary Wang

High Dimensional Model Representation (HDMR) is a tool for generating an approximation of the input-output behavior of a multivariate function. It can be used to model a black-box function for metamodel-based optimization. Recently, the authors' team developed a radial basis function based HDMR (RBF-HDMR) model that can efficiently model a high-dimensional black-box function and, moreover, uncover the inner variable structure of the black-box function. This approach, however, requires a completely new, although optimized, set of sample points, as dictated by the methodology, while in engineering design practice one often has many existing sample points. How to use such existing data to efficiently construct an HDMR model is the focus of this paper. We first examine Random Sampling HDMR (RS-HDMR), which uses orthonormal basis functions as HDMR component functions and whose basis-function coefficients can be calculated from existing sample points. One important issue with RS-HDMR is that, in theory, the basis functions are obtained from continuous integrations tied to the orthonormality conditions; in practice, these integrations are approximated by Monte Carlo summation, so the basis functions may not satisfy the orthonormality conditions. In this paper, we propose new, adaptive orthonormal basis functions defined with respect to a given set of sample points for RS-HDMR approximation. RS-HDMR models are built for different test functions using the standard and the new adaptive basis functions for different numbers of sample points. The relative errors of both models are calculated and compared. The results show that the models built using the new basis functions are more accurate.
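The core idea of basis functions that are orthonormal with respect to a given sample set (rather than to a continuous integral) can be sketched concretely. This is a minimal illustration, not the paper's construction: it orthonormalizes monomials under the empirical inner product via QR factorization (Gram-Schmidt, in effect), and the sample distribution and basis choice are assumptions.

```python
import numpy as np

def adaptive_basis(samples, funcs):
    """Orthonormalize basis functions w.r.t. the EMPIRICAL inner product
    <u, v> = (1/n) * sum_k u(x_k) v(x_k) over the given sample points,
    so orthonormality holds exactly on the data, not merely for the
    continuous integral that a Monte Carlo summation approximates."""
    V = np.column_stack([f(samples) for f in funcs])   # raw basis, evaluated
    Q, _ = np.linalg.qr(V)                             # Gram-Schmidt, in effect
    return Q * np.sqrt(len(samples))                   # rescale for the 1/n factor

# Deliberately nonuniform samples: monomials 1, x, x^2 are far from
# orthogonal under this distribution, so the standard basis breaks down
x = rng = np.random.default_rng(2).beta(2, 5, 500) * 2 - 1
B = adaptive_basis(x, [np.ones_like, lambda t: t, lambda t: t**2])

G = B.T @ B / len(x)     # Gram matrix under the empirical inner product
print(np.round(G, 10))   # identity: exactly orthonormal on these samples
```

With the standard (continuous) orthonormal basis, the Gram matrix over a finite, nonuniform sample set would deviate from the identity, which is the source of the modeling error the abstract attributes to RS-HDMR.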

