Error Models for the Kinetic Evaluation of Chemical Degradation Data

Environments ◽  
2019 ◽  
Vol 6 (12) ◽  
pp. 124
Author(s):  
Johannes Ranke ◽  
Stefan Meinecke

In the kinetic evaluation of chemical degradation data, degradation models are fitted to the data by varying degradation model parameters to obtain the best possible fit. Today, constant variance of the deviations of the observed data from the model is frequently assumed (error model “constant variance”). Allowing for a different variance for each observed variable (“variance by variable”) has been shown to be a useful refinement. On the other hand, experience gained in analytical chemistry shows that the absolute magnitude of the analytical error often increases with the magnitude of the observed value, which can be explained by an error component which is proportional to the true value. Therefore, kinetic evaluations of chemical degradation data using a two-component error model with a constant component (absolute error) and a component increasing with the observed values (relative error) are newly proposed here as a third possibility. In order to check which of the three error models is most adequate, they have been used in the evaluation of datasets obtained from pesticide evaluation dossiers published by the European Food Safety Authority (EFSA). For quantitative comparisons of the fits, the Akaike information criterion (AIC) was used, as the commonly used error level defined by the FOrum for the Coordination of pesticide fate models and their USe (FOCUS) is based on the assumption of constant variance. A set of fitting routines that allows for robust fitting of all three error models was developed within the mkin software package. Comparisons using parent-only degradation datasets, as well as datasets with the formation and decline of transformation products, showed that in many cases, the two-component error model proposed here provides the most adequate description of the error structure. While it was confirmed that the variance by variable error model often provides an improved representation of the error structure in kinetic fits with metabolites, it could be shown that in many cases, the two-component error model leads to a further improvement. In addition, it can be applied to parent-only fits, potentially improving the accuracy of the fit towards the end of the decline curve, where concentration levels are lower.
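To make the proposal concrete, here is a minimal sketch of fitting a single first-order (SFO) decline curve by maximum likelihood under both the constant-variance and the two-component error model, compared by AIC. This is a hypothetical Python illustration, not the mkin implementation (mkin is an R package), and the dataset values are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Assumed illustrative dataset: parent decline in % of applied amount
t = np.array([0, 3, 7, 14, 28, 56, 90], dtype=float)
obs = np.array([101.2, 81.5, 63.0, 40.1, 17.3, 3.9, 1.1])

def neg_log_lik(theta, two_component):
    """Negative log-likelihood of an SFO fit under the chosen error model."""
    if two_component:
        p0, k, s_abs, rsd = theta          # sigma(y) = sqrt(s_abs^2 + (rsd*y)^2)
    else:
        (p0, k, s_abs), rsd = theta, 0.0   # constant variance
    pred = p0 * np.exp(-k * t)             # single first-order (SFO) model
    sigma = np.maximum(np.sqrt(s_abs**2 + (rsd * pred)**2), 1e-8)
    return -norm.logpdf(obs, loc=pred, scale=sigma).sum()

for two_comp, x0 in [(False, [100, 0.1, 2]), (True, [100, 0.1, 2, 0.05])]:
    fit = minimize(neg_log_lik, x0, args=(two_comp,), method="Nelder-Mead")
    aic = 2 * len(x0) + 2 * fit.fun        # AIC = 2k - 2 ln L
    label = "two-component" if two_comp else "constant variance"
    print(f"{label}: AIC = {aic:.2f}")
```

The error model with the lower AIC would be preferred; with real EFSA datasets this comparison is what the paper automates across many fits.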

2017 ◽  
Author(s):  
Mario R. Hernández-López ◽  
Félix Francés

Abstract. Over the years, Standard Least Squares (SLS) has been the most commonly adopted criterion for the calibration of hydrological models, despite the fact that its assumptions are generally not fulfilled: very often errors are autocorrelated, heteroscedastic, biased and/or non-Gaussian. Similarly to recent papers, which suggest more appropriate models for the errors in hydrological modeling, this paper addresses the challenging problem of jointly estimating hydrological and error model parameters (joint inference) in a Bayesian framework, trying to solve some of the problems found in previous related research. This paper performs a Bayesian joint inference through the application of different inference models, such as the known SLS or WLS and the new GL++ and GL++Bias error models. These inferences were carried out on two lumped hydrological models which were forced with daily hydrometeorological data from a basin of the MOPEX project. The main finding of this paper is that a joint inference, to be statistically correct, must take into account the joint probability distribution of the state variable to be predicted and its deviation from the observations (the errors). Consequently, the relationship between the marginal and conditional distributions of this joint distribution must be taken into account in the inference process. This relation is defined by two general statistical expressions called the Total Laws (TLs): the Total Expectation and the Total Variance Laws. Only simple error models, such as SLS, do not explicitly require the implementation of the TLs. An important consequence of enforcing the TLs is the reduction of the degrees of freedom in the inference problem, namely the reduction of the parameter space dimension. This research demonstrates that non-fulfillment of the TLs produces incorrect error and hydrological parameter estimates and unreliable predictive distributions. The target of a (joint) inference must be to fulfill the error model hypotheses rather than to achieve the best fit to the observations. Consequently, for a given hydrological model, the resulting performance of the prediction, the reliability of its predictive uncertainty, as well as the robustness of the parameter estimates, will be exclusively conditioned by the degree to which the errors fulfill the error model hypotheses.
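For reference, the two Total Laws invoked here are the standard conditional-moment identities, with $Y$ the observed variable and $X$ the simulated state:

$$\mathrm{E}[Y] = \mathrm{E}_X\!\left[\mathrm{E}[Y \mid X]\right] \quad \text{(Total Expectation Law)}$$

$$\mathrm{Var}(Y) = \mathrm{E}_X\!\left[\mathrm{Var}(Y \mid X)\right] + \mathrm{Var}_X\!\left(\mathrm{E}[Y \mid X]\right) \quad \text{(Total Variance Law)}$$

Enforcing these identities ties the marginal moments of the observations to the conditional error model, which is what removes free parameters and reduces the dimension of the joint inference.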


2020 ◽  
Vol 14 (3) ◽  
pp. 369-379
Author(s):  
Kanglin Xing ◽  
J. R. R. Mayer ◽  
Sofiane Achiche

The scale and master ball artefact (SAMBA) method allows estimating the inter- and intra-axis error parameters as well as volumetric errors (VEs) of a five-axis machine tool by using simple ball artefacts and the machine tool’s own touch-trigger probe. The SAMBA method can use two different machine error models, named after the number of model parameters, i.e., the “13” and “84” machine error models, to estimate the VEs. In this study, we compare these two machine error models when using VE vector directions and values for monitoring the machine tool condition for three cases of machine malfunctions: 1) a C-axis encoder fault, 2) an induced X-axis linear positioning error, and 3) an induced straightness error simulated fault. The results show that the “13” machine error model produces more concentrated VE directions but smaller VE values when compared with the “84” machine error model; furthermore, although both models can recognize the three faults and are effective in monitoring the machine tool condition, the “13” machine error model achieves a better recognition rate of the machine condition. This paper provides guidelines for selecting machine error models for the SAMBA method when using VEs to monitor the machine tool condition.
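The condition-monitoring idea of comparing VE vector directions and values against a healthy baseline can be sketched as follows; the cosine-similarity measure, pose count, and error magnitudes are assumptions for illustration, not the SAMBA method itself:

```python
import numpy as np

def ve_direction_stats(ve_baseline, ve_current):
    """Cosine similarity of VE directions and ratio of VE magnitudes, per pose."""
    cos = np.sum(ve_baseline * ve_current, axis=1) / (
        np.linalg.norm(ve_baseline, axis=1) * np.linalg.norm(ve_current, axis=1))
    mag_ratio = (np.linalg.norm(ve_current, axis=1)
                 / np.linalg.norm(ve_baseline, axis=1))
    return cos, mag_ratio

rng = np.random.default_rng(0)
baseline = rng.normal(size=(50, 3)) * 5e-3                  # 50 poses, XYZ VEs in mm
faulty = baseline * 1.8 + rng.normal(size=(50, 3)) * 1e-3   # simulated fault
cos, ratio = ve_direction_stats(baseline, faulty)
print("mean direction similarity:", cos.mean().round(3),
      "| mean magnitude ratio:", ratio.mean().round(2))
```

A fault that preserves direction but inflates magnitude, as in this toy case, would show high similarity with a ratio well above one; different fault types leave different signatures in the two statistics.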


1996 ◽  
Vol 169 ◽  
pp. 713-714
Author(s):  
S. A. Kutuzov

The interval method of estimating model parameters (MPs) for the Galaxy was suggested earlier (Kutuzov 1988). Intervals are proposed to be used both for observational estimates of galactic parameters (GPs) and for the values of MPs. In this work we consider a model as a tool for studying the mutual interaction of GPs. A two-component model is considered (Kutuzov, Ossipkov 1989). We have to estimate the array P of eight MPs.
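As a toy illustration of the interval idea (the forward model, bounds, and parameter ranges below are entirely hypothetical), one can retain every MP vector whose predicted GPs fall inside the observational intervals, yielding interval estimates rather than a single best-fit point:

```python
import numpy as np

def predicted_gps(mp):
    """Toy forward model mapping model parameters to galactic parameters."""
    return np.array([mp[0] * mp[1], mp[0] + 2.0 * mp[1]])

gp_intervals = np.array([[180.0, 260.0],   # observational bounds on GP 1 (toy)
                         [40.0, 60.0]])    # observational bounds on GP 2 (toy)

rng = np.random.default_rng(0)
candidates = rng.uniform([5.0, 10.0], [15.0, 30.0], size=(10_000, 2))
pred = np.array([predicted_gps(mp) for mp in candidates])
ok = np.all((pred >= gp_intervals[:, 0]) & (pred <= gp_intervals[:, 1]), axis=1)
admissible = candidates[ok]                # interval estimate of the MP array
print(admissible.shape[0], "admissible MP vectors")
```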


2017 ◽  
Vol 14 (18) ◽  
pp. 4295-4314 ◽  
Author(s):  
Dan Lu ◽  
Daniel Ricciuto ◽  
Anthony Walker ◽  
Cosmin Safta ◽  
William Munger

Abstract. Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. Calibration with DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and the existence of local optima is always a concern. In addition, this effort evaluates the assumptions of the error model used in the Bayesian calibration through residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the resulting likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
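For reference, a heteroscedastic, AR(1)-correlated Gaussian error model leads to a log-likelihood of roughly the following form; the specific standard-deviation model sigma_t = a + b*|model_t| and the AR(1) parameterization are assumptions for illustration, not necessarily the paper's exact choices:

```python
import numpy as np

def log_lik(residuals, model, a, b, phi):
    """Log-likelihood under sigma_t = a + b*|model_t| and AR(1) standardized errors."""
    sigma = a + b * np.abs(model)             # heteroscedastic standard deviation
    eta = residuals / sigma                   # standardized residuals, AR(1) with coef phi
    scale = np.sqrt(1.0 - phi**2)             # innovation scale for unit marginal variance
    innov = eta[1:] - phi * eta[:-1]
    ll = -0.5 * eta[0]**2 - np.log(sigma[0])  # stationary initial term
    ll += np.sum(-0.5 * (innov / scale)**2 - np.log(sigma[1:] * scale))
    ll -= 0.5 * residuals.size * np.log(2.0 * np.pi)
    return ll
```

Setting phi = 0 recovers the uncorrelated heteroscedastic case, which is why ignoring correlation (phi > 0 in the data) tends to overstate the information content of the residuals and underestimate parameter uncertainty.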


2021 ◽  
Vol 62 ◽  
pp. 85-100
Author(s):  
Robert Garafutdinov

The influence of ARFIMA model parameters on the accuracy of financial time series forecasting is investigated using artificially generated long-memory series and the daily log returns of the RTS index as examples. The investigated parameters are the deviation of the integration order from its «true» value, as well as the memory «length» considered by the model. Based on the results, some practical recommendations for modeling with ARFIMA are formulated.
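A minimal sketch of the kind of experiment described, generating an ARFIMA(0, d, 0) long-memory series by truncated fractional differencing, where the truncation length plays the role of the memory «length»; the setup is assumed, not the author's code:

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - B)^(-d): w[0] = 1, w[k] = w[k-1] * (k - 1 + d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 + d) / k
    return w

def arfima_0d0(d, n_obs, memory=500, seed=0):
    """Simulate ARFIMA(0, d, 0) as a truncated MA(memory) filter of white noise."""
    eps = np.random.default_rng(seed).normal(size=n_obs + memory)
    w = frac_diff_weights(d, memory)
    # x_t = sum_{k < memory} w_k * eps_{t-k}: take the fully-weighted convolution part
    return np.convolve(eps, w)[memory - 1 : memory - 1 + n_obs]

series = arfima_0d0(d=0.3, n_obs=1000)   # a «true» integration order of 0.3
```

Varying d away from the generating value, and varying `memory`, then comparing forecast errors, mirrors the two parameters the study investigates.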


2012 ◽  
Vol 3 (3) ◽  
pp. 35-52
Author(s):  
Steve Saed ◽  
Lingxi Li ◽  
Dongsoo S. Kim

This study proposes and evaluates an average consensus scheme for wireless sensor networks. To this end, two communication error models, the fading signal error model and the approximated fading signal error model, are introduced and incorporated into the proposed decentralized average consensus scheme, which is especially adapted to the constraints of wireless sensor networks. A mathematical analysis is presented to derive the approximated fading signal model from the fading signal model, and different simulation scenarios are introduced and their results analyzed to evaluate the performance of the proposed scheme and its effectiveness in meeting the needs of wireless sensor networks.
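The flavor of such a scheme can be sketched as follows; the ring topology, step size, and additive-Gaussian stand-in for the fading-link error are assumptions for illustration, not the paper's exact models:

```python
import numpy as np

def consensus_step(x, neighbors, step, rng, noise_std=0.01):
    """One synchronous iteration; each received neighbor value carries link noise."""
    x_new = x.copy()
    for i, nbrs in enumerate(neighbors):
        received = x[nbrs] + rng.normal(scale=noise_std, size=len(nbrs))
        x_new[i] += step * np.sum(received - x[i])
    return x_new

rng = np.random.default_rng(1)
x0 = rng.uniform(10, 30, size=6)                        # initial sensor readings
ring = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]   # ring topology
x = x0.copy()
for _ in range(200):
    x = consensus_step(x, ring, step=0.2, rng=rng)
print("node estimates:", x.round(2), "| true average:", x0.mean().round(2))
```

With noiseless links all nodes converge exactly to the initial average; the question the error models address is how the fading-induced perturbations bias and spread that limit.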


2014 ◽  
Vol 71 (1) ◽  
Author(s):  
Bello Abdulkadir Rasheed ◽  
Robiah Adnan ◽  
Seyed Ehsan Saffari ◽  
Kafi Dano Pati

In a linear regression model, the ordinary least squares (OLS) method is considered the best method to estimate the regression parameters if the assumptions are met. However, if the data do not satisfy the underlying assumptions, the results will be misleading. Violation of the assumption of constant variance in least squares regression is caused by the presence of outliers and heteroscedasticity in the data. This assumption of constant variance (homoscedasticity) is very important in linear regression, as it is under this assumption that the least squares estimators enjoy the property of minimum variance. Therefore, a robust regression method is required to handle the problem of outliers in the data. This research uses weighted least squares (WLS) techniques to estimate the regression coefficients when the assumption of constant error variance is violated. WLS estimation is equivalent to carrying out OLS on transformed variables. However, WLS can easily be affected by outliers. To remedy this, we suggest a robust technique for the estimation of regression parameters in the presence of heteroscedasticity and outliers. Here we apply robust M-estimation regression using iteratively reweighted least squares (IRWLS) with the Huber and Tukey bisquare functions, and the resistant least trimmed squares regression estimator, to estimate the model parameters for state-wide crime data of the United States in 1993. The outcomes of the study indicate that the estimators obtained from the M-estimation techniques and the least trimmed squares method are more effective than those obtained from OLS.
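A minimal sketch of M-estimation via IRWLS with Huber weights; the tuning constant c = 1.345 and the MAD-based scale are common conventions, and this is an illustration rather than the paper's exact procedure:

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weight function applied to scaled residuals: 1 inside, c/|r| outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

def irwls(X, y, n_iter=50, tol=1e-8):
    """M-estimation by iteratively reweighted least squares, starting from OLS."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12       # robust scale (MAD)
        w = np.sqrt(huber_weights(r / s))
        beta_new = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([2.0, 3.0]) + rng.normal(size=100)
y[:5] += 25.0                                           # inject outliers
print("OLS  :", np.linalg.lstsq(X, y, rcond=None)[0].round(2))
print("IRWLS:", irwls(X, y).round(2))
```

The reweighting downweights large residuals, so the injected outliers pull the OLS estimates away from (2, 3) while the IRWLS estimates stay close.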


2020 ◽  
Vol 9 (1) ◽  
pp. 156-168
Author(s):  
Seyed Mahdi Mousavi ◽  
Saeed Dinarvand ◽  
Mohammad Eftekhari Yazdi

Abstract. The unsteady convective boundary layer flow of a nanofluid along a permeable shrinking/stretching plate under suction and second-order slip effects has been developed. Buongiorno’s two-component nonhomogeneous equilibrium model is implemented to take the effects of Brownian motion and thermophoresis into consideration. It should be emphasized that our two-phase nanofluid model with a slip concentration condition at the wall captures the physics better than assuming a constant volume concentration at the wall. The similarity transformation method (STM) allows us to reduce the nonlinear governing PDEs to nonlinear dimensionless ODEs, which are then solved numerically by the Keller-box method (KBM). The graphical results portray the effects of model parameters on boundary layer behavior. Moreover, the results are validated through the skin friction and the reduced Nusselt number. We find that the shrinking plate case is a key factor in the non-uniqueness of the solutions, and that the range of the shrinking parameter for which a solution exists increases with the first-order slip parameter, the absolute value of the second-order slip parameter, and the transpiration rate parameter. Besides, the second-order slip at the interface decreases the rate of heat transfer in a nanofluid. Finally, the analysis for no-slip and first-order slip boundary conditions can also be retrieved as special cases of the present model.
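As a rough stand-in for the similarity-transformed problem (the full two-phase nanofluid system is far richer), here is a sketch that solves a Blasius-type similarity ODE with suction and first- and second-order velocity slip at the wall; the equation, boundary conditions, and parameter values are simplified assumptions, and solve_bvp replaces the Keller-box method:

```python
import numpy as np
from scipy.integrate import solve_bvp

S, A, B = 0.5, 0.3, -0.1      # transpiration and slip parameters (assumed values)

def rhs(eta, y):
    # y = [f, f', f'']; Blasius-type similarity equation f''' = -0.5 f f''
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(ya, yb):
    fppp0 = -0.5 * ya[0] * ya[2]              # f'''(0) recovered from the ODE
    return np.array([
        ya[0] - S,                            # f(0) = S (suction/transpiration)
        ya[1] - (1 + A * ya[2] + B * fppp0),  # second-order slip at the wall
        yb[1],                                # f'(eta_max) -> 0 far from the plate
    ])

eta = np.linspace(0.0, 10.0, 200)
y_guess = np.vstack([S + 1 - np.exp(-eta), np.exp(-eta), -np.exp(-eta)])
sol = solve_bvp(rhs, bc, eta, y_guess)
print("f''(0) (proportional to skin friction):", round(sol.y[2, 0], 4))
```

Setting B = 0 recovers the first-order slip case and A = B = 0 the no-slip case, mirroring how the paper retrieves those limits as special cases.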

