Heuristics-Enhanced Model Fusion Considering Incomplete Data Using Kriging Models

2017 ◽  
Vol 140 (2) ◽  
Author(s):  
Anton v. Beek ◽  
Mian Li ◽  
Chao Ren

Simulation models are widely used to describe processes that would otherwise be arduous to analyze. However, many of these models merely provide an estimated response of the real system, as their input parameters are subject to uncertainty or are partially excluded from the model due to complexity or a lack of understanding of the problem's physics. Accordingly, prediction accuracy can be improved by integrating physical observations into low-fidelity models, a process known as model calibration or model fusion. Typical model fusion techniques are essentially concerned with how to allocate information-rich data points to improve model accuracy; methods for extracting more information from already available data points, however, have received little attention. In this paper we therefore acknowledge the dependence between the prior estimates of the input parameters and the actual input parameters. The proposed framework extracts the information contained in this relation to update the estimated input parameters and utilizes it in a model-updating scheme to accurately approximate the real system outputs that are affected by all real input parameters (RIPs) of the problem. The proposed approach can effectively use limited experimental samples while maintaining prediction accuracy: it adjusts model parameters to update the computer simulation model so that it matches a specific set of experimental results. The significance and applicability of the proposed method are illustrated through comparison with a conventional model calibration scheme on two engineering examples.
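The fusion step described above can be sketched in a few lines: calibrate the simulator's input parameter against physical observations, then train a Kriging (Gaussian process) model on the remaining discrepancy. Everything below (the simulator, the "true" parameter value, the bias term) is a hypothetical stand-in for the paper's setup, not its actual formulation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical low-fidelity simulator with one calibration parameter theta.
def simulator(x, theta):
    return theta * np.sin(x)

# "Physical" observations from an assumed real process: theta_true = 1.8
# plus a small unmodeled bias term the simulator cannot reproduce.
x_obs = np.linspace(0.0, 2 * np.pi, 8)
y_obs = 1.8 * np.sin(x_obs) + 0.2 * x_obs / (2 * np.pi)

# Step 1: estimate theta by least squares against the observations.
thetas = np.linspace(0.5, 3.0, 251)
sse = [np.sum((simulator(x_obs, t) - y_obs) ** 2) for t in thetas]
theta_hat = thetas[int(np.argmin(sse))]

# Step 2: fit a Kriging (Gaussian process) model to the residual
# discrepancy, then fuse it with the calibrated simulator.
resid = y_obs - simulator(x_obs, theta_hat)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), alpha=1e-6)
gp.fit(x_obs.reshape(-1, 1), resid)

def fused_model(x):
    x = np.atleast_1d(x)
    return simulator(x, theta_hat) + gp.predict(x.reshape(-1, 1))
```

The fused predictor reproduces the observations even where the bias term makes the calibrated simulator alone systematically wrong.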

Author(s):  
Gernoth Götz ◽  
Oldrich Polach

This article presents an evaluation of the model validation method that was provided as the output of the European research project DynoTRAIN and implemented in the recently revised European standard EN 14363. The input parameters of the validation method, namely the section length, the number and selection of sections, and selected parameters of the simulation models, are varied. The evaluation shows that a single section with a large deviation between simulation and measurement can, in rare cases, influence the overall validation result. Nevertheless, the investigations demonstrate good robustness: the final validation result is very rarely affected by the variation of sections selected for validation, by the use of more sections than the minimum of 12, or by sections longer than those specified for on-track tests in accordance with EN 14363. The validation methodology is also able to recognize errors in vehicle model parameters, provided the errors have a relevant influence on the running dynamics of the evaluated vehicle.
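The section-wise comparison at the heart of such a validation can be shown schematically. The deviation measure and acceptance limits below are invented for illustration only and are not the actual EN 14363 statistic:

```python
import numpy as np

# Illustrative only: each track section contributes one simulated and one
# measured value of an assessment quantity; the section-wise deviations
# are then checked against hypothetical acceptance limits.
n_sections = 12                                   # minimum section count
measured = np.linspace(0.8, 1.2, n_sections)      # made-up measured values
simulated = measured + 0.03 * np.sin(np.arange(n_sections))  # small deviations

deviations = simulated - measured
mean_dev = deviations.mean()
std_dev = deviations.std(ddof=1)

# Hypothetical acceptance rule: mean and spread of the section-wise
# deviations must both stay below chosen limits.
validated = abs(mean_dev) < 0.1 and std_dev < 0.15
```

Varying which sections enter `measured`/`simulated` probes exactly the robustness question the article investigates: whether one outlying section can flip `validated`.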


Author(s):  
Paul D. Arendt ◽  
Wei Chen ◽  
Daniel W. Apley

The use of complex computer simulations to design, improve, optimize, or simply to better understand complex systems in many fields of science and engineering is now ubiquitous. However, simulation models are never a perfect representation of physical reality. Two general sources of uncertainty that account for the differences between simulations and experiments are parameter uncertainty and model uncertainty. The former derives from unknown model parameters, while the latter is caused by underlying missing physics, numerical approximations, and other inaccuracies of the computer simulation that exist even if all of the parameters are known. To obtain knowledge of these two sources of uncertainty, data from computer simulations (usually abundant) and data from physical experiments (typically more limited) are often combined using statistical methods. Statistical adjustment of the computer simulation model to account for the two sources of uncertainty is referred to as calibration. We argue that calibration as it is typically implemented, using only a single response variable, is challenging in that it is often extremely difficult to distinguish between the effects of parameter and model uncertainty. However, many different responses (distinct responses and/or the same response measured at different spatial and temporal locations) are automatically calculated in simulations. As multiple responses generally share a mutual dependence on the unknown parameters, they provide valuable information that can improve identifiability of parameter and model uncertainty in calibration, if they are also measured experimentally. In this paper, we explore the use of multiple responses for calibration.
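The identifiability argument can be made concrete with a toy least-squares analogue (not the paper's Bayesian formulation): two responses share one physical parameter while each carries its own constant model bias, and fitting them jointly separates all three unknowns:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy illustration: two simulated responses share one unknown parameter
# theta; each response also has its own unknown constant model bias b1, b2.
x = np.linspace(0.0, 1.0, 10)
theta_true = 2.0
y1 = theta_true * x + 0.3           # response 1, true bias 0.3
y2 = theta_true * x**2 - 0.2        # response 2, true bias -0.2

def residuals(p):
    theta, b1, b2 = p
    return np.concatenate([
        theta * x + b1 - y1,        # misfit of response 1
        theta * x**2 + b2 - y2,     # misfit of response 2
    ])

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0])
theta_hat, b1_hat, b2_hat = fit.x
```

Because the two responses depend on theta through different functions of x, the joint fit pins down theta and both biases; with a single response and a flexible discrepancy term, parameter and model uncertainty would trade off against each other.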


1988 ◽  
Vol 15 (1) ◽  
pp. 30-35 ◽  
Author(s):  
G. D. Grosz ◽  
R. L. Elliott ◽  
J. H. Young

Abstract Growth simulation models provide potential benefit in the study of peanut (Arachis hypogaea L.) production. Two physiologically-based peanut simulation models of varying complexity were adapted and calibrated to simulate the growth and yield of Spanish peanut under Oklahoma conditions. Field data, including soil moisture measurements and sequential yield samples, were collected at four sites during the 1985 growing season. An automated weather station provided the necessary climatic data for the models. PNUTMOD, the simpler model originally developed for educational purposes, requires seven varietal input parameters in addition to temperature and solar radiation data. The seven model parameters were calibrated using data from two of the four field sites, and model performance was evaluated using the remaining two data sets. The more complex model, PEANUT, simulates individual plant physiological processes and utilizes a considerably larger set of input parameters. Since PEANUT was developed for the Virginia type peanut, several input parameters required adjustment for the Spanish type peanut grown in Oklahoma. PEANUT was calibrated using data from all four study sites. Both models performed well in simulating pod yield. PNUTMOD, which does not allow for leaf senescence, did not perform as well as PEANUT in predicting vegetative growth.
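The calibration of varietal parameters against sequential yield samples can be illustrated with a deliberately simple stand-in model (a three-parameter logistic curve, not PNUTMOD itself; all numbers are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# A minimal stand-in for a simple growth model: logistic pod-yield
# accumulation with three varietal parameters to calibrate.
def logistic_yield(t, ymax, rate, t_mid):
    return ymax / (1.0 + np.exp(-rate * (t - t_mid)))

# Hypothetical sequential yield samples (kg/ha) at days after planting.
t_obs = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
y_obs = np.array([150.0, 900.0, 2400.0, 3200.0, 3400.0])

# Calibrate on the samples, as the models were calibrated on field sites;
# held-out sites would then be used for evaluation.
params, _ = curve_fit(logistic_yield, t_obs, y_obs,
                      p0=[3500.0, 0.08, 75.0])
ymax_hat, rate_hat, t_mid_hat = params
```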


Author(s):  
Marko Hofmann

Model calibration is the task of adjusting an already existing model to a reference system. In general, this is done by adjusting model parameters to a set of given samples from the reference system. Model calibration is often regarded as necessary for complex simulation models in order to create a homomorphic ("structurally equivalent") abstraction of (a special aspect of) reality. This paper introduces a formal approach to model calibration. Within the frame of this formalism, it is shown that model calibration is NP-complete. The practical implications of these theoretical results are presumably of minor importance for most single models. For huge model federations, however, the complexity of parameter calibration could impose a serious limit on the validation of the federation and on its cost-benefit ratio.
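The combinatorial flavour of the problem is easy to see in a brute-force calibrator: with a grid of discretized values per parameter, an exhaustive search over n parameters costs levels**n model evaluations, which is what makes large federations expensive. The model and grid below are invented for illustration:

```python
import itertools
import numpy as np

# Brute-force calibrator: try every combination of discretized parameter
# values and keep the one that best reproduces the reference samples.
# The search space, and hence the cost, grows as levels ** n_params.
def brute_force_calibrate(model, grids, x, y_ref):
    best_p, best_err = None, np.inf
    for p in itertools.product(*grids):
        err = np.sum((model(x, *p) - y_ref) ** 2)
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err

# Toy reference system with two parameters.
def model(x, a, b):
    return a * x + b

x = np.array([0.0, 1.0, 2.0, 3.0])
y_ref = 2.0 * x + 1.0                    # samples from the reference system
grid = np.linspace(0.0, 3.0, 7)          # 7 levels per parameter
p_hat, err = brute_force_calibrate(model, [grid, grid], x, y_ref)
```

Two parameters cost 49 evaluations here; twenty parameters at the same resolution would already cost 7**20, which is the exponential blow-up behind the NP-completeness result.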


2020 ◽  
Vol 14 (3) ◽  
pp. 7141-7151 ◽  
Author(s):  
R. Omar ◽  
M. N. Abdul Rani ◽  
M. A. Yunus

Efficient and accurate finite element (FE) modelling of bolted joints is essential for increasing confidence in the investigation of structural vibrations. However, modelling bolted joints for this purpose is often very challenging. This paper proposes an appropriate FE representation of bolted joints for predicting the dynamic behaviour of a bolted joint structure. Two FE models of the structure are developed with two different element connectors, CBEAM and CBUSH, representing the bolted joints. Model updating is used to correlate the two FE models with the experimental model. The dynamic behaviour of the two FE models is compared with experimental modal analysis to determine the more appropriate FE model of the bolted joint structure. The comparison reveals that the CBUSH-based FE model represents the bolted joints with 86 percent accuracy and updates the model parameters more efficiently. The proposed modelling technique will be useful in the modelling of complex structures with large numbers of bolted joints.
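Model updating of a connector stiffness can be reduced to a toy example: a two-mass system whose "joint" spring is tuned until the predicted first natural frequency matches a measured one. All values are invented; a real FE update would adjust CBUSH properties in the same spirit:

```python
import numpy as np

# Minimal model-updating analogue (not an FE model): two unit masses, a
# ground spring, and a connector spring k_joint standing in for the
# bolted-joint stiffness.
def natural_freqs(k_joint, k_ground=1000.0, m=1.0):
    K = np.array([[k_ground + k_joint, -k_joint],
                  [-k_joint, k_joint]])
    M = np.eye(2) * m
    # M is the identity here, so M^-1 K stays symmetric.
    eigvals = np.linalg.eigvalsh(np.linalg.solve(M, K))
    return np.sqrt(np.sort(eigvals)) / (2 * np.pi)      # frequencies in Hz

# "Measured" first natural frequency, generated from a joint stiffness
# of 400 that the updating procedure is supposed to recover.
f_meas = natural_freqs(400.0)[0]

# Update k_joint by scanning candidates and minimizing the frequency
# error, as model updating does for the connector parameters.
candidates = np.linspace(100.0, 800.0, 701)
errors = [abs(natural_freqs(k)[0] - f_meas) for k in candidates]
k_updated = candidates[int(np.argmin(errors))]
```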


2019 ◽  
Vol 147 (5) ◽  
pp. 1429-1445 ◽  
Author(s):  
Yuchu Zhao ◽  
Zhengyu Liu ◽  
Fei Zheng ◽  
Yishuai Jin

Abstract We performed parameter estimation in the Zebiak–Cane model for the real-world scenario using ensemble Kalman filter (EnKF) data assimilation and observational sea surface temperature and wind stress analyses. With real-world data assimilation in the coupled model, our study shows that the model parameters converge toward stable values. Furthermore, the new parameters improve the real-world ENSO prediction skill, with the largest improvement coming from the parameter of highest climate sensitivity (gam2), which controls the strength of the anomalous upwelling advection term in the SST equation. The improved prediction skill is contributed mainly by the improvement in the model dynamics and secondarily by the improvement in the initial field. Finally, geographically dependent parameter optimization further improves the prediction skill across all regions. Our study suggests that parameter optimization using ensemble data assimilation may provide an effective strategy to improve climate models and their real-world climate predictions in the future.
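The parameter-estimation mechanism can be sketched with a scalar toy model (not the Zebiak–Cane equations): augmenting the EnKF update with the parameter lets observations of the state pull the parameter ensemble toward its true value over repeated assimilation cycles:

```python
import numpy as np

# Toy EnKF parameter estimation: the state x depends linearly on an
# uncertain parameter p, and observing x corrects the p ensemble.
rng = np.random.default_rng(42)
p_true, forcing, obs_err = 2.5, 1.2, 0.05

n_ens = 500
p_ens = rng.normal(1.5, 0.5, n_ens)           # biased prior parameter ensemble

for _ in range(10):                            # assimilation cycles
    x_ens = p_ens * forcing                    # forecast step
    y_obs = p_true * forcing + rng.normal(0.0, obs_err)   # noisy observation
    # Kalman gain from the ensemble covariances.
    cov_px = np.cov(p_ens, x_ens)[0, 1]
    var_x = np.var(x_ens, ddof=1)
    gain = cov_px / (var_x + obs_err**2)
    # Perturbed-observation update of each member's parameter.
    y_pert = y_obs + rng.normal(0.0, obs_err, n_ens)
    p_ens = p_ens + gain * (y_pert - x_ens)

p_hat = p_ens.mean()                           # converges toward p_true
```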


Water ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1484
Author(s):  
Dagmar Dlouhá ◽  
Viktor Dubovský ◽  
Lukáš Pospíšil

We present an approach for calibrating the parameters of simplified evaporation models by optimizing them against the most comprehensive model for evaporation estimation, the Penman–Monteith equation. That model computes evaporation from several input quantities, such as air temperature, wind speed, heat storage and net radiation. However, not all of these values are always available, so simplified models must be used. Our interest in free water surface evaporation stems from the ongoing hydric reclamation of the former Ležáky–Most quarry, i.e., the restoration of mined land to a natural and economically usable state. For emerging pit lakes, the prediction of evaporation and of the water level plays a crucial role. We examine the methodology on several popular models and standard statistical measures. The presented approach can be applied in a general model calibration process against any theoretical or measured evaporation.
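The calibration reduces to ordinary curve fitting once reference evaporation values are available. In the sketch below, the reference series stands in for Penman–Monteith output, and the simplified model is an invented temperature-only power law; both are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: mean air temperature and reference evaporation
# values that would, in practice, come from the Penman-Monteith equation.
temp = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])      # deg C
e_ref = np.array([0.8, 1.5, 2.4, 3.6, 5.0, 6.7])          # mm/day

# Simplified, temperature-only model: E = a * T**b,
# with a and b the parameters to calibrate.
def simple_model(t, a, b):
    return a * t**b

params, _ = curve_fit(simple_model, temp, e_ref, p0=[0.1, 1.0])
a_hat, b_hat = params

# Standard statistical measure of the calibration quality.
rmse = np.sqrt(np.mean((simple_model(temp, *params) - e_ref) ** 2))
```

Swapping in a different simplified model or a measured evaporation series changes nothing about the procedure, which is the generality the abstract claims.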


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Sen Zhang ◽  
Qiang Fu ◽  
Wendong Xiao

Accurate click-through rate (CTR) prediction can not only improve an advertising company's reputation and revenue but also help advertisers optimize their advertising performance. Two main problems in CTR prediction remain unsolved: low prediction accuracy due to the imbalanced distribution of advertising data, and the lack of a real-time advertisement bidding implementation. In this paper, we develop a novel online CTR prediction approach that incorporates real-time bidding (RTB) advertising through the following strategies: a user profile system is constructed from historical RTB advertising data to describe user features, historical CTR features, ID features, and other numerical features; and a novel CTR prediction approach is presented that addresses the imbalanced sample distribution by integrating the Weighted-ELM (WELM) and AdaBoost algorithms. Compared with commonly used algorithms, the proposed approach improves the CTR significantly.
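The WELM idea (penalizing errors on the rare click class more heavily) can be sketched in a few lines of NumPy: a single weighted ELM without the AdaBoost ensemble around it, on made-up imbalanced data:

```python
import numpy as np

# Minimal Weighted-ELM sketch for imbalanced binary data: a random hidden
# layer followed by a class-weighted ridge solution for the output weights.
rng = np.random.default_rng(0)

# Imbalanced toy data: 200 non-clicks, 20 clicks.
X_neg = rng.normal(0.0, 1.0, (200, 2))
X_pos = rng.normal(2.0, 1.0, (20, 2))
X = np.vstack([X_neg, X_pos])
y = np.concatenate([-np.ones(200), np.ones(20)])

# Random, untrained hidden layer (the ELM part).
n_hidden = 50
W_in = rng.normal(size=(2, n_hidden))
b_in = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b_in)

# Per-sample weights by inverse class frequency (the "weighted" part),
# so the minority click class is not drowned out.
w = np.where(y > 0, 1.0 / 20, 1.0 / 200)
Wd = np.diag(w)

# Closed-form weighted ridge solution for the output weights beta.
C = 10.0
beta = np.linalg.solve(H.T @ Wd @ H + np.eye(n_hidden) / C, H.T @ Wd @ y)

pred = np.sign(H @ beta)
minority_recall = np.mean(pred[y > 0] == 1)
```

AdaBoost would then reweight the samples each round and combine several such base learners; the weighting above is what keeps each base learner from ignoring the clicks.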


2021 ◽  
pp. 1-14
Author(s):  
Zhenggang Wang ◽  
Jin Jin

Remote sensing image segmentation provides technical support for decision making in many areas of environmental resource management. However, the quality of remote sensing images obtained from different channels can vary considerably, and manually labeling massive amounts of image data is too expensive and inefficient. In this paper, we propose a point density force field clustering (PDFC) process. According to the spectral information of different ground objects, remote sensing superpixel points are divided into core and edge data points. Differences in the densities of core data points form local peaks, and the center of an initial cluster is determined by the weighted density and position of a local peak. An iterative nebular clustering process produces the result, and a new objective function optimizes the model parameters automatically to obtain the globally optimal clustering solution. The proposed algorithm automatically clusters the areas of different ground objects in remote sensing images; the resulting categories can then be labeled manually with little effort.
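The density-and-distance logic behind such clustering can be sketched with a compact density-peak scheme, in the spirit of PDFC though the paper's force-field and nebular-iteration details differ:

```python
import numpy as np

# Density-peak clustering sketch: local density (rho) plus distance to the
# nearest denser point (delta) select cluster centers; the remaining points
# inherit the label of their nearest denser neighbour.
def density_peak_cluster(X, d_c, n_clusters):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (d < d_c).sum(axis=1) - 1            # neighbours within d_c
    n = len(X)
    delta = np.zeros(n)
    parent = np.full(n, -1)
    order = np.argsort(-rho)                   # densest first
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = d[i].max()              # global density peak
        else:
            denser = order[:rank]
            j = denser[np.argmin(d[i, denser])]
            delta[i], parent[i] = d[i, j], j
    # Centers: points with both high density and large delta.
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                            # densest-first label spreading
        if labels[i] == -1:
            labels[i] = labels[parent[i]]
    return labels

# Two well-separated toy blobs standing in for two ground-object classes.
rng = np.random.default_rng(3)
A = rng.normal(0.0, 0.3, (30, 2))
B = rng.normal(5.0, 0.3, (30, 2))
labels = density_peak_cluster(np.vstack([A, B]), d_c=0.5, n_clusters=2)
```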

