Analysis of approaches to building a probability density function for the mathematical model parameters of rat atrial cardiomyocytes

2019 ◽  
Author(s):  
D. V. Shmarko ◽  
T. M. Nesterova ◽  
K. S. Ushenin


2019 ◽  
Vol 67 (4) ◽  
pp. 283-303
Author(s):  
Chettapong Janya-anurak ◽  
Thomas Bernard ◽  
Jürgen Beyerer

Abstract Many industrial and environmental processes are characterized as complex spatio-temporal systems. Such systems, known as distributed parameter systems (DPSs), are usually highly complex, and it is difficult to establish the relation between model inputs, model outputs, and model parameters. Moreover, the solutions of physics-based models commonly deviate from measurements. In this work, appropriate Uncertainty Quantification (UQ) approaches are selected and combined systematically to analyze and identify such systems. Two main challenges arise when applying UQ approaches to nonlinear distributed parameter systems: (1) how the uncertainties are modeled and (2) the computational effort, since conventional methods require numerous model evaluations to compute the probability density function of the response. This paper presents a framework that addresses both issues. Within the Bayesian framework, incomplete knowledge about the system is treated as uncertainty of the system. The uncertainties are represented by random variables, whose probability density functions are derived from the available knowledge of the parameters using the Principle of Maximum Entropy. The generalized Polynomial Chaos (gPC) expansion is employed to reduce the computational effort. The proposed gPC-based Bayesian UQ framework is capable of analyzing systems systematically and reducing the disagreement between model predictions and measurements of the real process until user-defined performance criteria are fulfilled. The efficiency of the framework is assessed by applying it to a benchmark model (the neutron diffusion equation) and to a model of a complex rheological forming process. These applications illustrate that the framework can systematically analyze the system and optimally calibrate the model parameters.
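The computational savings from gPC come from replacing the expensive model with a spectral surrogate. A minimal sketch, assuming a toy one-parameter scalar model (not the paper's DPS) with the parameter uniform on [-1, 1], for which Legendre polynomials are the orthogonal gPC basis:

```python
import numpy as np
from numpy.polynomial import legendre

# Toy placeholder model of one uncertain parameter xi ~ U(-1, 1).
def model(xi):
    return np.exp(0.5 * xi) + 0.1 * xi**2

order = 5
nodes, weights = legendre.leggauss(order + 1)  # Gauss-Legendre rule

# Spectral projection: c_k = <model, P_k> / <P_k, P_k>, with the
# inner products evaluated by quadrature.
coeffs = []
for k in range(order + 1):
    Pk = legendre.Legendre.basis(k)
    num = np.sum(weights * model(nodes) * Pk(nodes))
    den = np.sum(weights * Pk(nodes) ** 2)
    coeffs.append(num / den)

surrogate = legendre.Legendre(coeffs)

# Cheap statistics from the coefficients alone: the mean is c_0,
# and the variance follows from orthogonality, E[P_k^2] = 1/(2k+1).
mean = coeffs[0]
variance = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
```

Once built, the surrogate stands in for the model inside Monte Carlo or Bayesian updating loops, which is where the reduction in computational effort comes from.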


Mathematics ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 548
Author(s):  
Yuri S. Popkov

The problem of randomized maximum entropy estimation of the probability density function of random model parameters from real data with measurement noise is formulated. The estimation procedure maximizes an information entropy functional on a set of integral equalities that depend on the real data set. The technique of Gâteaux derivatives is developed to solve this problem in analytical form. The probability density function estimates depend on Lagrange multipliers, which are obtained by balancing the model’s output with the real data. A global theorem on the implicit dependence of these Lagrange multipliers on the length of the data sample is established using the rotation of homotopic vector fields. A theorem on the asymptotic efficiency of the randomized maximum entropy estimate in terms of stationary Lagrange multipliers is formulated and proved. The proposed method is illustrated on the problem of forecasting the evolution of the thermokarst lake area in Western Siberia.
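The structure of such an estimate can be illustrated numerically. A sketch, with two toy moment constraints standing in for the paper's data-dependent integral equalities: the maximum entropy density has exponential-family form, and the Lagrange multipliers are found by Newton's method on the convex dual (the target moments below are assumed values, not data from the paper):

```python
import numpy as np

# Maximum entropy density on a grid subject to E[x] = 0.5, E[x^2] = 1.
# The solution is p(x) proportional to exp(lam1*x + lam2*x^2).
x = np.linspace(-3.0, 3.0, 2001)
dx = x[1] - x[0]
T = np.vstack([x, x**2])        # sufficient statistics
target = np.array([0.5, 1.0])   # assumed target moments

lam = np.zeros(2)
for _ in range(50):
    logp = lam @ T
    logp -= logp.max()           # numerical stability
    p = np.exp(logp)
    p /= p.sum() * dx            # normalize
    moments = T @ p * dx         # current E[T] under p
    grad = target - moments
    # Hessian of the dual is the covariance of T under p.
    cov = (T * p * dx) @ T.T - np.outer(moments, moments)
    lam += np.linalg.solve(cov, grad)
    if np.abs(grad).max() < 1e-10:
        break
```

At convergence the multipliers `lam` exactly balance the constraints, mirroring the role the stationary Lagrange multipliers play in the paper's asymptotic results.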


2021 ◽  
pp. 107754632110201
Author(s):  
Mohammad Ali Heravi ◽  
Seyed Mehdi Tavakkoli ◽  
Alireza Entezami

In this article, autoregressive time series analysis is used to extract reliable features from vibration measurements of civil structures for damage diagnosis. To guarantee the adequacy and applicability of the time series model, the Leybourne–McCabe hypothesis test is used. Subsequently, the probability density functions of the autoregressive model parameters and residuals are obtained with the aid of a kernel density estimator. These probability density function sets are treated as damage-sensitive features of the structure, and the fast distance correlation method is used to decide whether the structure is damaged. Experimental data from a well-known three-story laboratory frame and a large-scale bridge benchmark structure are used to verify the efficiency and accuracy of the proposed method. Results indicate the capability of the method to identify the location and severity of damage, even under simulated operational and environmental variability.
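The feature-extraction step can be sketched as follows, with a synthetic AR(2) signal standing in for measured vibrations (the model order, coefficients, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(2) "vibration" record.
n = 2000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.1)

# Least-squares AR(2) fit: regress y[t] on y[t-1] and y[t-2].
X = np.column_stack([y[1:n - 1], y[0:n - 2]])
coefs, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
residuals = y[2:] - X @ coefs

# Gaussian kernel density estimate of the residual distribution;
# PDFs like this one (of AR parameters and residuals) serve as the
# damage-sensitive features in the article.
def kde(samples, grid, bandwidth):
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi))

grid = np.linspace(-0.5, 0.5, 401)
bw = 1.06 * residuals.std() * len(residuals) ** (-0.2)  # Silverman's rule
pdf = kde(residuals, grid, bw)
```

Comparing such PDFs between a baseline state and a test state (here, via the fast distance correlation of the article) is what turns them into a damage indicator.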


Author(s):  
Mohsen Abou-Ellail ◽  
Ryo S. Amano ◽  
Samer Elhaw ◽  
Karam Beshay ◽  
Hatem Kayed

The present paper describes a mathematical model for turbulent methane-air jet diffusion flames. The mathematical model solves density-weighted governing equations for momentum, mass continuity, turbulent kinetic energy and its dissipation rate. The combustion model solves density-weighted transport equations for the mixture fraction “f”, its variance “g” and its skewness “s”. These variables are used to compute one part of the probability density function (PDF) in the mixture fraction domain. The second part of the PDF is computed from the numerical solutions of the mixture fraction dissipation rate “χ” and its variance “χ″²”. The resulting two-dimensional PDF is defined in the mixture-fraction–scalar-dissipation-rate 2D space. The flamelet combustion sub-model is used to compute the mean flame temperature, density and species mass fractions. The flamelet model provides instantaneous state relationships for the stretched flamelets up to the extinction limit. The mean flame properties are computed by integrating the stretched-flamelet state relationships over the two-dimensional PDF. The present 2D probability density function model can predict rim-attached flames as well as unstable lifted flames, because the flamelet model provides information on the flame instability arising from the stretching effects of high-speed flowing gases. The new two-dimensional probability density function is used to predict the flame properties of a free-jet methane-air flame for which experimental data exist.
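The integration of state relationships over a joint PDF can be sketched numerically. Everything below is an assumption for illustration, not the paper's model: f and χ are taken as independent, f beta-distributed from an assumed mean and variance, χ log-normal, and T(f, χ) is a toy stand-in for a real stretched-flamelet library:

```python
import numpy as np
from math import gamma

f = np.linspace(1e-4, 1 - 1e-4, 200)     # mixture fraction grid
chi = np.linspace(1e-3, 50.0, 200)       # scalar dissipation grid
F, C = np.meshgrid(f, chi, indexing="ij")

# Beta PDF for f from an assumed mean f_mean and variance g.
f_mean, g = 0.3, 0.02
a = f_mean * (f_mean * (1 - f_mean) / g - 1)
b = (1 - f_mean) * (f_mean * (1 - f_mean) / g - 1)
beta_pdf = f**(a - 1) * (1 - f)**(b - 1) * gamma(a + b) / (gamma(a) * gamma(b))

# Log-normal PDF for chi from an assumed mean and log-std.
chi_mean, sigma = 5.0, 0.5
mu = np.log(chi_mean) - 0.5 * sigma**2
chi_pdf = np.exp(-(np.log(chi) - mu)**2 / (2 * sigma**2)) / (
    chi * sigma * np.sqrt(2 * np.pi))

# Toy flamelet temperature: peaks near an assumed stoichiometric f,
# quenched as chi approaches an assumed extinction limit chi_q.
chi_q = 40.0
T = 300 + 1700 * np.exp(-((F - 0.3) / 0.15)**2) * np.clip(1 - C / chi_q, 0, 1)

# Mean temperature = double integral of T(f, chi) * P(f, chi).
P = beta_pdf[:, None] * chi_pdf[None, :]       # joint PDF (independence)
df, dc = f[1] - f[0], chi[1] - chi[0]
T_mean = np.sum(T * P) * df * dc
```

The independence assumption is the simplest possible construction of a 2D PDF; the paper's model builds the joint PDF from transported moments of both f and χ instead.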


Author(s):  
Ezequiel López-Rubio ◽  
Juan Miguel Ortiz-de-Lazcano-Lobato ◽  
Domingo López-Rodríguez ◽  
Enrique Mérida-Casermeiro ◽  
María del Carmen Vargas-González

Author(s):  
E.V. Danko ◽  
Ye.K. Yergaliyev ◽  
M.N. Madiyarov

The paper describes the implementation of investment projects under conditions of uncertainty. In the developed mathematical model, the effectiveness of an investment project is evaluated by the NPV index. This index is treated as a random variable that an investor can estimate to within a segment [NPV1;NPV2]. The main difficulties in the decision-making process arise when the segment [NPV1;NPV2] includes zero. In the developed model, the probability density function of the NPV is taken in the form of Pearson curves of the first type. This paper discusses in detail several specific points that must be taken into account when choosing a particular form of the probability density function of NPV. The main element of the proposed model is the subjective utility function, and many questions regarding its practical use are also reviewed extensively in this article. The main requirement for successful use of the subjective utility function in practice is a carefully prepared scenario-analysis section for the investment project; such a section is present in almost all modern business plans. The practical application of the developed mathematical model improves the quality of decisions concerning the investment of funds into projects.
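A Pearson type I density is a beta distribution shifted and scaled to the segment [NPV1; NPV2], so the hard case, where the segment straddles zero, reduces to computing the probability mass below zero. A sketch with assumed bounds and shape parameters (not values from the paper):

```python
import numpy as np
from math import gamma

npv1, npv2 = -50.0, 200.0       # assumed segment straddling zero
a, b = 2.0, 3.0                 # assumed Pearson type I shape parameters

x = np.linspace(npv1, npv2, 5001)
t = (x - npv1) / (npv2 - npv1)  # map NPV onto [0, 1]

# Beta(a, b) density rescaled to the NPV segment (Pearson type I).
pdf = t**(a - 1) * (1 - t)**(b - 1) * gamma(a + b) / (
    gamma(a) * gamma(b) * (npv2 - npv1))

dx = x[1] - x[0]
p_loss = pdf[x < 0].sum() * dx   # P(NPV < 0), the loss probability
npv_mean = (x * pdf).sum() * dx  # expected NPV
```

The loss probability and expected NPV computed this way are exactly the inputs a subjective utility function would weigh when the interval includes zero.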

