Semi-Analytic Probability Density Function for System Uncertainty

Author(s):  
Ahmad Bani Younes ◽  
James Turner

In general, the behavior of science and engineering systems is predicted with nonlinear mathematical models. Imprecise knowledge of the model parameters alters the system response relative to the assumed nominal model data. We propose an algorithm for generating insight into the range of variability that can be expected due to model uncertainty. An automatic differentiation tool builds the exact partial-derivative models required to develop a state transition tensor series (STTS)-based solution for nonlinearly mapping initial uncertainty models into instantaneous uncertainty models. The fully nonlinear statistical system properties are recovered via series approximations. The governing nonlinear probability density function is approximated by developing an inverse mapping algorithm for the forward series model. Numerical examples are presented that demonstrate the effectiveness of the proposed methodology.
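The first-order term of such a tensor series is the ordinary state transition matrix, which maps a Gaussian initial covariance linearly; a minimal sketch for a system whose transition matrix is known analytically (the oscillator dynamics and variance values below are illustrative, not taken from the paper):

```python
import numpy as np

def stm_oscillator(t):
    # Analytic state transition matrix for the unit harmonic oscillator
    # x'' = -x with state (x, v): the first-order term of an STT series.
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]])

def propagate_covariance(P0, t):
    # First-order uncertainty mapping: P(t) = Phi(t) P0 Phi(t)^T.
    Phi = stm_oscillator(t)
    return Phi @ P0 @ Phi.T

P0 = np.diag([0.04, 0.01])                 # initial position/velocity variances
Pt = propagate_covariance(P0, np.pi / 2)   # quarter period: variances swap
```

Higher-order STTS terms would add tensor corrections to this linear mapping; the sketch only shows the leading-order behavior.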

Author(s):  
Ahmad Bani Younes ◽  
James Turner

In general, the behavior of science and engineering systems is predicted with nonlinear mathematical models. Imprecise knowledge of the model parameters alters the system response relative to the assumed nominal model data. We propose an algorithm for generating insight into the range of variability that can be expected due to model uncertainty. An automatic differentiation tool builds exact partial-derivative models to develop a state transition tensor series (STTS)-based solution for mapping initial uncertainty models into instantaneous uncertainty models. Nonlinear transformations are developed for mapping an initial probability density function into the current probability density function, which allows fully nonlinear statistical system properties to be computed; this also demands the inverse mapping of the series. The resulting nonlinear probability density function (pdf) represents a Liouville approximation to the stochastic Fokker-Planck equation. Numerical examples are presented that demonstrate the effectiveness of the proposed methodology.


Author(s):  
O. P. Tomchina ◽  
D. N. Polyakhov ◽  
O. I. Tokareva ◽  
A. L. Fradkov

Introduction: The motion of many real-world systems is described by essentially nonlinear and non-stationary models. A number of approaches to the control of such plants are based on constructing an internal model of the non-stationarity. However, the parameters of the non-stationarity model can vary widely, leading to larger errors. This paper assumes only that the rate of change of the plant parameters is limited, while the initial uncertainty can be quite large.

Purpose: Analysis of adaptive control algorithms for nonlinear and time-varying systems with an explicit reference model, synthesized by the speed gradient method.

Results: An estimate was obtained for the maximum deviation of a closed-loop system solution from the reference model solution. It is shown that with sufficiently slow changes in the parameters and a small initial uncertainty, the limiting error in the system can be made arbitrarily small. Both systems designed by the direct approach and systems based on the identification approach are considered. The procedures for synthesizing an adaptive regulator and analyzing the synthesized system are illustrated by an example.

Practical relevance: The obtained results allow us to build and analyze a broad class of adaptive systems with reference models under non-stationary conditions.
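As a toy illustration of the speed-gradient idea (not the synthesis procedure of the paper), consider the scalar plant x' = a*x + u with unknown gain a and the reference model xm' = -xm + r; the plant, gains, and reference model below are all illustrative choices:

```python
def simulate(a=2.0, gamma=5.0, r=1.0, dt=1e-3, T=20.0):
    # Scalar plant x' = a*x + u with unknown a; reference model
    # xm' = -xm + r. Control u = -(theta + 1)*x + r makes the closed loop
    # match the reference model once the estimate theta reaches a.
    x, xm, theta = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = x - xm                       # tracking error
        u = -(theta + 1.0) * x + r
        theta += dt * gamma * e * x      # speed-gradient adaptation law
        x += dt * (a * x + u)
        xm += dt * (-xm + r)
    return x, xm, theta

x, xm, theta = simulate()
```

The Lyapunov function V = e^2/2 + (a - theta)^2/(2*gamma) gives dV/dt = -e^2 along trajectories, so the tracking error is driven to zero even though a is never known in advance.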


Author(s):  
Suryanarayana R. Pakalapati ◽  
Hayri Sezer ◽  
Ismail B. Celik

Dual number arithmetic is a well-known strategy for automatic differentiation of computer codes that gives exact derivatives, to machine accuracy, of the computed quantities with respect to any of the involved variables. A common application of this concept in computational fluid dynamics, and in numerical modeling in general, is to assess the sensitivity of mathematical models to their parameters. However, dual number arithmetic, in theory, finds the derivatives of the actual mathematical expressions evaluated by the computer code. Thus the sensitivity to a model parameter found by dual number automatic differentiation is essentially that of the combination of the model equations, the numerical scheme, and the grid used to solve the equations, not that of the model equations alone, as implied by some studies. This aspect of the sensitivity analysis of numerical simulations using dual number automatic differentiation is explored in the current study. A simple one-dimensional advection-diffusion equation is discretized using different finite volume schemes, and the resulting systems of equations are solved numerically. Derivatives of the numerical solutions with respect to parameters are evaluated automatically using dual number automatic differentiation and, for comparison, are also estimated using finite differencing. The analytical solution of the original PDE is also found, and its derivatives are computed analytically. It is shown that a mathematical model can show different sensitivity to a model parameter depending on the numerical method employed to solve the equations and the grid resolution used. This distinction is important, since such inter-dependence needs to be carefully addressed to avoid confusion when reporting the sensitivity of predictions to a model parameter using a computer code. A systematic assessment of numerical uncertainty in the sensitivities computed using automatic differentiation is presented.
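A minimal sketch of the effect described above, assuming nothing about the authors' code: a hand-rolled dual-number type differentiates an explicit-Euler discretization of u' = -k*u, and the derivative it returns is exactly that of the discrete solution (1 - h*k)^n, not of the analytic solution exp(-2k):

```python
class Dual:
    # Minimal dual-number type: carries value and derivative together;
    # eps^2 = 0 makes every arithmetic operation apply the chain rule.
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(float(o))
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)

def u_numeric(k, n=20):
    # Explicit-Euler discretization of u' = -k*u, u(0) = 1, out to x = 2.
    h = 2.0 / n
    u = Dual(1.0)
    for _ in range(n):
        u = u + (u * k) * (-h)        # u_{i+1} = u_i - h*k*u_i
    return u

k = Dual(0.5, 1.0)                    # seed d/dk = 1
u = u_numeric(k)
# u.der is the exact derivative of the *discrete* solution (1 - h*k)**n,
# which differs from the analytic d/dk exp(-2k) by the truncation error:
# the computed sensitivity belongs to the scheme and the grid, not to the
# model equations alone.
```

Refining the grid (larger n) moves both the value and the sensitivity toward their analytic counterparts, which is precisely the grid dependence examined in the study.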


2008 ◽  
Vol 15 (1) ◽  
pp. 221-232 ◽  
Author(s):  
A. J. Cannon ◽  
W. W. Hsieh

Abstract. Robust variants of nonlinear canonical correlation analysis (NLCCA) are introduced to improve performance on datasets with low signal-to-noise ratios, for example those encountered when making seasonal climate forecasts. The neural network model architecture of standard NLCCA is kept intact, but the cost functions used to set the model parameters are replaced with more robust variants. The Pearson product-moment correlation in the double-barreled network is replaced by the biweight midcorrelation, and the mean squared error (mse) in the inverse mapping networks can be replaced by the mean absolute error (mae). Robust variants of NLCCA are demonstrated on a synthetic dataset and are used to forecast sea surface temperatures in the tropical Pacific Ocean based on the sea level pressure field. Results suggest that adoption of the biweight midcorrelation can lead to improved performance, especially when a strong, common event exists in both the predictor and predictand datasets. Replacing the mse with the mae leads to improved performance on the synthetic dataset, but not on the climate dataset except at the longest lead time, which suggests that the appropriate cost function for the inverse mapping networks is problem dependent.
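The biweight midcorrelation itself is straightforward to compute; a sketch (the 9-MAD tuning constant is the conventional choice, and the data below are illustrative):

```python
import numpy as np

def bicor(x, y):
    # Biweight midcorrelation: a robust replacement for the Pearson
    # correlation. Points more than 9 MADs from the median get zero weight.
    def centered_weighted(v):
        med = np.median(v)
        mad = np.median(np.abs(v - med))
        u = (v - med) / (9.0 * mad)
        w = (1.0 - u**2)**2 * (np.abs(u) < 1.0)
        return (v - med) * w
    a = centered_weighted(np.asarray(x, dtype=float))
    b = centered_weighted(np.asarray(y, dtype=float))
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

x = np.arange(20.0)
y = x.copy()
y_out = x.copy()
y_out[-1] = 100.0      # one gross outlier degrades Pearson, not bicor
```

On clean linear data it agrees with the Pearson correlation, while a single gross outlier, down-weighted to zero, barely moves it.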


2013 ◽  
Vol 10 (3) ◽  
pp. 2835-2878
Author(s):  
A. Hartmann ◽  
M. Weiler ◽  
T. Wagener ◽  
J. Lange ◽  
M. Kralik ◽  
...  

Abstract. More than 30% of Europe's land surface is made up of karst exposures. In some countries, water from karst aquifers constitutes almost half of the drinking water supply. Hydrological simulation models can predict the large-scale impact of future environmental change on hydrological variables. However, the information needed to obtain model parameters is not available everywhere, and regionalisation methods have to be applied. The responsive behaviour of hydrological systems can be quantified by individual metrics, so-called system signatures. This study explores their value for distinguishing the dominant processes and properties of five different karst systems in Europe and the Middle East, with the overall aim of regionalising system signatures and model parameters to ungauged karst areas. We define ten system signatures derived from hydrodynamic and hydrochemical observations and apply a process-based karst model to the five karst systems. In a stepwise model evaluation strategy, optimum parameters and their sensitivity are identified using automatic calibration and global variance-based sensitivity analysis. System signatures and sensitive parameters serve as proxies for dominant processes, and optimised parameters are used to determine system properties. To test the transferability of the signatures, they are compared with the optimised model parameters and with simple climatic and topographic descriptors of the five karst systems. Through sensitivity analysis, the set of system signatures was able to distinguish the karst systems from one another by providing separate information about dominant soil, epikarst, and fast and slow groundwater flow processes. Comparing sensitive parameters to the system signatures revealed that annual discharge can serve as a proxy for the recharge area, that the slopes of the high-flow parts of the flow duration curves correlate with the fast-flow storage constant, and that the dampening of the isotopic signal of the rain, as well as the medium-flow parts of the flow duration curves, has a non-linear relation to the distribution of groundwater dynamics. Even though only weak correlations between system signatures and climatic and topographic factors could be found, our approach enabled us to identify the dominant processes of the different systems and to provide directions for future large-scale simulation of karst areas to predict the impact of future change on karst water resources.
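As an example of such a signature, the slope of the high-flow segment of the flow duration curve can be computed directly from a discharge series; a sketch with an illustrative exceedance range (the exact range and the synthetic data are assumptions, not values from the study):

```python
import numpy as np

def fdc_high_flow_slope(q, lo=0.01, hi=0.20):
    # Flow duration curve: discharge sorted in decreasing order against
    # exceedance probability. The signature is the slope of log(Q) over
    # the high-flow exceedance range [lo, hi] (here 1-20%, illustrative).
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    p = (np.arange(1, q.size + 1) - 0.5) / q.size   # exceedance probability
    mask = (p >= lo) & (p <= hi)
    slope, _ = np.polyfit(p[mask], np.log(q[mask]), 1)
    return slope

# Synthetic example: a flashier (heavier-tailed) discharge series gives a
# steeper, more negative high-flow slope than a damped one.
rng = np.random.default_rng(0)
flashy = rng.lognormal(mean=0.0, sigma=1.5, size=5000)
damped = rng.lognormal(mean=0.0, sigma=0.5, size=5000)
```

A steeper high-flow slope indicates a flashier response, which is why this signature can act as a proxy for fast-flow storage behaviour.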


F1000Research ◽  
2015 ◽  
Vol 4 ◽  
pp. 1030 ◽  
Author(s):  
Thomas Cokelaer ◽  
Mukesh Bansal ◽  
Christopher Bare ◽  
Erhan Bilal ◽  
Brian M. Bot ◽  
...  

DREAM challenges are community competitions designed to advance computational methods and address fundamental questions in systems biology and translational medicine. Each challenge asks participants to develop and apply computational methods to either predict unobserved outcomes or to identify unknown model parameters given a set of training data. Computational methods are evaluated using an automated scoring metric, scores are posted to a public leaderboard, and methods are published to facilitate community discussions on how to build improved methods. By engaging participants from a wide range of science and engineering backgrounds, DREAM challenges can comparatively evaluate a wide range of statistical, machine learning, and biophysical methods. Here, we describe DREAMTools, a Python package for evaluating DREAM challenge scoring metrics. DREAMTools provides a command line interface that enables researchers to test new methods on past challenges, as well as a framework for scoring new challenges. As of September 2015, DREAMTools includes more than 80% of completed DREAM challenges. DREAMTools complements the data, metadata, and software tools available at the DREAM website http://dreamchallenges.org and on the Synapse platform https://www.synapse.org.
Availability: DREAMTools is a Python package. Releases and documentation are available at http://pypi.python.org/pypi/dreamtools. The source code is available at http://github.com/dreamtools.


F1000Research ◽  
2016 ◽  
Vol 4 ◽  
pp. 1030 ◽  
Author(s):  
Thomas Cokelaer ◽  
Mukesh Bansal ◽  
Christopher Bare ◽  
Erhan Bilal ◽  
Brian M. Bot ◽  
...  

DREAM challenges are community competitions designed to advance computational methods and address fundamental questions in systems biology and translational medicine. Each challenge asks participants to develop and apply computational methods to either predict unobserved outcomes or to identify unknown model parameters given a set of training data. Computational methods are evaluated using an automated scoring metric, scores are posted to a public leaderboard, and methods are published to facilitate community discussions on how to build improved methods. By engaging participants from a wide range of science and engineering backgrounds, DREAM challenges can comparatively evaluate a wide range of statistical, machine learning, and biophysical methods. Here, we describe DREAMTools, a Python package for evaluating DREAM challenge scoring metrics. DREAMTools provides a command line interface that enables researchers to test new methods on past challenges, as well as a framework for scoring new challenges. As of March 2016, DREAMTools includes more than 80% of completed DREAM challenges. DREAMTools complements the data, metadata, and software tools available at the DREAM website http://dreamchallenges.org and on the Synapse platform at https://www.synapse.org.
Availability: DREAMTools is a Python package. Releases and documentation are available at http://pypi.python.org/pypi/dreamtools. The source code is available at http://github.com/dreamtools/dreamtools.


1996 ◽  
Vol 80 (5) ◽  
pp. 1819-1828 ◽  
Author(s):  
M. E. Cabrera ◽  
H. J. Chizeck

The relationship between blood lactate concentration ([La]) and O2 uptake (VO2) during incremental exercise remains controversial: does [La] increase smoothly as a function of VO2 (continuous model), or does it begin to increase abruptly above a particular metabolic rate (threshold model)? The dynamic characteristics of the underlying physiological system are investigated using system identification analysis techniques. A multivariate deterministic time series model of the [La] and VO2 response to incremental changes in work rate was fitted to simulated and experimental data. Time-varying system response parameters were determined through the application of a weighted recursive least squares algorithm. The model, using the identified time-varying parameters, provided a good fit to the data. The variation of these parameters over time was then examined. Two major transitions in the parameters were found to occur at intensity levels equivalent to 53 +/- 8% and 77 +/- 9% maximal VO2 (experimental data). These changes in the model parameters indicate that the best linear dynamic model that fits the observed system behavior has changed. This implies that the system has changed its operation in some way, by altering its structure or by moving to a different operating region. The identified parameter changes over time suggest that the exercise intensity range (from rest to maximal VO2) is divided into three main intensity domains, each with distinct dynamics. Further study of this three-phase system may help in the understanding of the underlying physiological mechanisms that affect the dynamics of [La] and VO2 during exercise.
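A weighted recursive least squares estimator with exponential forgetting, of the kind used here to track time-varying parameters, can be sketched as follows (the regressors, forgetting factor, and switching parameter are illustrative, not the physiological model):

```python
import numpy as np

def rls(phi, y, lam=0.98, delta=100.0):
    # Weighted (exponentially forgetting) recursive least squares.
    # phi: (n, p) regressor rows; y: (n,) observations. A forgetting
    # factor lam < 1 lets the estimate track time-varying parameters.
    n, p = phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)
    history = np.empty((n, p))
    for t in range(n):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)          # gain vector
        theta = theta + k * (y[t] - x @ theta) # innovation update
        P = (P - np.outer(k, x @ P)) / lam     # covariance update
        history[t] = theta
    return history

# Synthetic piecewise system: y = a*x with a switching from 1 to 3 halfway,
# mimicking an abrupt transition in identified time-varying parameters.
rng = np.random.default_rng(1)
x = rng.normal(size=400)
a_true = np.where(np.arange(400) < 200, 1.0, 3.0)
y = a_true * x + 0.05 * rng.normal(size=400)
est = rls(x[:, None], y)
```

The estimate tracks the parameter before and after the simulated transition, which is how abrupt changes in the identified parameters can flag a change in the underlying system's operation.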


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 823
Author(s):  
Muhammad Zafar Iqbal ◽  
Muhammad Zeshan Arshad ◽  
Gamze Özel ◽  
Oluwafemi Samson Balogun

Background: Modeling the complex random phenomena frequently observed in reliability engineering, hydrology, ecology, medical science, and the agricultural sciences was once thought to be an enigma. Scientists and practitioners agree that an appropriate but simple model is the best choice for such investigations. To address these issues, scientists have previously discussed a variety of bounded and unbounded, simple to complex lifetime models. Methods: We discuss a modified Lehmann type II (ML-II) model as a better approach to modeling bathtub-shaped and asymmetric random phenomena. A number of complementary mathematical and reliability measures are developed and discussed, and explicit expressions for the moments, quantile function, and order statistics are derived. We then discuss the various shapes of the density and reliability functions over various choices of the model parameters. The maximum likelihood estimation (MLE) method is used to estimate the unknown model parameters, and a simulation study is carried out to evaluate the MLEs' asymptotic behavior. Results: We demonstrate ML-II's dominance over well-known competitors by modeling anxiety in women and electronic data.
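For the plain (unmodified) Lehmann type II family with a uniform baseline, F(x) = 1 - (1 - x)^alpha on (0, 1), the MLE has a closed form, which makes the shape of such a simulation study easy to sketch; this illustrates the general recipe only and is not the authors' ML-II model:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(alpha, n):
    # Inverse-CDF sampling for F(x) = 1 - (1 - x)^alpha on (0, 1).
    return 1.0 - rng.uniform(size=n) ** (1.0 / alpha)

def mle(x):
    # Setting the score n/alpha + sum(log(1 - x)) = 0 gives the
    # closed-form MLE: alpha_hat = -n / sum(log(1 - x)).
    return -x.size / np.sum(np.log1p(-x))

# Simulation study: the MLE concentrates around the true alpha as n grows.
alpha_true = 2.5
est_small = np.array([mle(sample(alpha_true, 50)) for _ in range(500)])
est_large = np.array([mle(sample(alpha_true, 2000)) for _ in range(500)])
```

As expected of a consistent MLE, the spread of the estimates shrinks as the sample size grows, which is the asymptotic behavior such a simulation study checks.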


2021 ◽  
Author(s):  
Adam Ciesielski ◽  
Thomas Forbriger

<p>Harmonic tidal analysis rests on the premise that, since short records and close frequencies result in an ill-conditioned matrix equation, a record of length T is required to distinguish harmonics with a frequency separation of 1/T (Rayleigh criterion). To achieve stability of the solution, tidal harmonics are grouped. Nevertheless, if the data contain additional information from different harmonics within the assumed groups, it cannot be resolved. While most of the information in each group is carried by the harmonic with the largest amplitude, the time series from the other harmonics are properly taken into account in the estimated amplitudes and phases. However, if the signal from the next largest harmonic in a group differs significantly from the expectation, the grouping parametrization may lead to an inaccurate estimate of the tidal parameters. That can be an issue if harmonics in a group do not have the same admittance factor, or if the assumed relationship between harmonics of degree 2 and 3 is false.</p><p>The bias caused by grouping tidal harmonics can be investigated with methods used for stabilizing inverse problem solutions. In our study, we abandon the concept of groups. The resulting ill-posedness of the problem is reduced by constraining the model parameters (1) to reference values and (2) to the condition that admittance shall be a smooth function of frequency. These regularization terms enter the least-squares objective function, and the trade-off parameter between model misfit and data residuals is chosen by the L-curve criterion. We demonstrate how this method can be used to reveal system properties hidden by wave grouping in tidal analysis. We also suggest that the forcing time series amplitude may be a more relevant grouping criterion than frequency closeness of harmonics alone.</p>
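A minimal sketch of this kind of regularized estimation (the harmonic design, noise level, and trade-off parameters below are illustrative; the study selects the trade-off parameter by the L-curve criterion rather than fixing it by hand):

```python
import numpy as np

def solve_regularized(G, d, m_ref, mu_ref, mu_smooth):
    # Damped, smoothness-constrained least squares:
    # minimize ||G m - d||^2 + mu_ref ||m - m_ref||^2 + mu_smooth ||L m||^2,
    # where L is the second-difference operator over neighbouring
    # frequencies, so admittance is pushed toward a smooth function
    # of frequency.
    p = G.shape[1]
    L = np.zeros((p - 2, p))
    for i in range(p - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = G.T @ G + mu_ref * np.eye(p) + mu_smooth * (L.T @ L)
    b = G.T @ d + mu_ref * m_ref
    return np.linalg.solve(A, b)

# Six cosine harmonics spaced far closer than 1/T (Rayleigh criterion):
# plain least squares is severely ill-conditioned, the regularized
# problem is not. All numbers are illustrative.
t = np.linspace(0.0, 10.0, 400)
freqs = np.linspace(1.00, 1.02, 6)
G = np.cos(2.0 * np.pi * freqs[None, :] * t[:, None])
m_true = np.linspace(1.0, 1.2, 6)     # smooth "admittance" across frequency
rng = np.random.default_rng(3)
d = G @ m_true + 0.05 * rng.normal(size=t.size)
m_reg = solve_regularized(G, d, np.ones(6), mu_ref=1e-3, mu_smooth=1.0)
m_lsq = np.linalg.lstsq(G, d, rcond=None)[0]
```

Because the true admittance varies smoothly with frequency, the smoothness penalty stabilizes the ungrouped estimate without biasing it, whereas the unregularized fit is dominated by noise amplified through the ill-conditioned directions.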

