Sensitivity Studies and Parameters Identification for Noisy 3D Moving AWJM Model

2016 ◽  
Vol 2016 ◽  
pp. 1-15
Author(s):  
Didier Auroux ◽  
Vladimir Groza

This work focuses on identifying optimal model parameters for the Abrasive Waterjet Milling (AWJM) process. Both uniform motion and variations of the jet feed speed were taken into account and studied in terms of a 3D time-dependent AWJM model, which makes it possible to predict the shape of the milled trench surface. The required trench profile can be obtained with high precision without prior knowledge of the model parameters, based only on experimental measurements. We use the adjoint approach to identify the AWJM model parameters. The complexity of the inverse problem, paired with the significant number of unknowns, makes it reasonable to use automatic differentiation software to obtain the adjoint statement. Interest in this problem is driven by the need of industrial milling applications to predict the behavior of the process. This study demonstrates the possibility of identifying the AWJM model parameters with sufficiently high accuracy and of predicting the shape formation, relying on self-generated data or on experimental measurements, for both uniform jet movement and arbitrary changes of feed speed. We provide results acceptable for production and estimate suitable parameters, taking into account different types of model and measurement errors.
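As a toy illustration of this kind of parameter identification, the sketch below fits the amplitude and width of a hypothetical Gaussian trench profile to noisy synthetic measurements by gradient descent on the misfit. It uses finite-difference gradients as a simple stand-in for the paper's adjoint gradients, and the model form and all values are invented for the example.

```python
import numpy as np

# Hypothetical trench model: Gaussian footprint with amplitude `a`
# (etch rate) and width `w` (jet radius). The paper's 3D time-dependent
# AWJM model is far richer; this is only a sketch.
def trench(x, a, w):
    return a * np.exp(-x**2 / (2.0 * w**2))

def misfit(params, x, d_obs):
    a, w = params
    r = trench(x, a, w) - d_obs
    return 0.5 * np.dot(r, r)

def grad_fd(params, x, d_obs, h=1e-6):
    # Central finite differences as a stand-in for the adjoint gradient.
    g = np.zeros_like(params)
    for i in range(len(params)):
        p1 = params.copy(); p1[i] += h
        p0 = params.copy(); p0[i] -= h
        g[i] = (misfit(p1, x, d_obs) - misfit(p0, x, d_obs)) / (2 * h)
    return g

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 81)
d_obs = trench(x, 1.5, 0.6) + 0.01 * rng.standard_normal(x.size)  # noisy "measurements"

p = np.array([1.0, 1.0])            # initial guess for (a, w)
for _ in range(2000):               # plain gradient descent
    p -= 0.01 * grad_fd(p, x, d_obs)
print(p)                            # estimates of (a, w)
```

With exact synthetic data the recovered parameters approach the true values up to the measurement noise, which mirrors the paper's observation that the profile can be identified from measurements alone.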

Cells ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 1516
Author(s):  
Daniel Gratz ◽  
Alexander J Winkle ◽  
Seth H Weinberg ◽  
Thomas J Hund

The voltage-gated Na+ channel Nav1.5 is critical for normal cardiac myocyte excitability. Mathematical models have been widely used to study Nav1.5 function and its links to a range of cardiac arrhythmias. There is growing appreciation for the importance of incorporating the physiological heterogeneity observed even in a healthy population into mathematical models of the cardiac action potential. Here, we apply methods from Bayesian statistics to capture the variability in experimental measurements on human atrial Nav1.5 across experimental protocols and labs. This variability was used to define a physiological distribution for model parameters in a novel model formulation of Nav1.5, which was then incorporated into an existing human atrial action potential model. Model validation was performed by comparing the simulated distribution of action potential upstroke velocity measurements to experimental measurements from several different sources. Going forward, we hope to apply this approach to other major atrial ion channels to create a comprehensive model of the human atrial AP. We anticipate that such a model will be useful for understanding excitability at the population level, including variable drug response and penetrance of variants linked to inherited cardiac arrhythmia syndromes.
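The core idea of propagating a parameter distribution through a channel model can be sketched in a few lines: sample gating parameters from a population distribution and look at the spread of a derived quantity. The Boltzmann activation curve and all numbers below are illustrative assumptions, not the paper's Nav1.5 formulation.

```python
import numpy as np

# Sketch: propagate a "physiological" parameter distribution through a
# gating model. The means and spreads are hypothetical, not fitted data.
rng = np.random.default_rng(1)
n = 10_000
v_half = rng.normal(-40.0, 3.0, n)   # activation midpoint (mV), assumed spread
k      = rng.normal(6.0, 0.8, n)     # slope factor (mV), assumed spread

def m_inf(v, v_half, k):
    # Boltzmann steady-state activation of a Na+ channel.
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

samples = m_inf(-30.0, v_half, k)    # open probability at -30 mV, per "cell"
print(samples.mean(), samples.std())
```

The resulting spread in open probability is the model-side analogue of the cell-to-cell variability the authors validate against upstroke-velocity distributions.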


Author(s):  
Geir Evensen

Abstract It is common to formulate the history-matching problem using Bayes' theorem. From Bayes' theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed perfect, and there are no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model with the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes' theorem that considers the data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better covers the observations with more substantial and realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the update's magnitude. The paper also shows the consistency of the subspace inversion scheme of Evensen (Ocean Dyn. 54, 539–560, 2004) in the case with correlated measurement errors and demonstrates its accuracy when using a "larger" ensemble of perturbations to represent the measurement error covariance matrix.
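To make the role of a correlated measurement-error covariance concrete, here is a minimal ensemble-smoother update on a linear toy problem, with an exponentially correlated error covariance entering the gain. This is a generic perturbed-observation update under invented dimensions and a random linear forward model, not Evensen's full formulation.

```python
import numpy as np

# Minimal ensemble-smoother update with a correlated measurement-error
# covariance C_ee; the forward model G and all sizes are illustrative.
rng = np.random.default_rng(2)
ne, nx, nd = 200, 3, 5           # ensemble size, parameters, data points

X = rng.normal(size=(nx, ne))    # prior parameter ensemble
G = rng.normal(size=(nd, nx))    # toy linear forward model
Y = G @ X                        # predicted-data ensemble

# Correlated measurement errors: exponential correlation between data points.
C_ee = 0.5 * np.exp(-np.abs(np.subtract.outer(np.arange(nd), np.arange(nd))) / 2.0)
d_obs = rng.normal(size=nd)
E = rng.multivariate_normal(np.zeros(nd), C_ee, ne).T   # correlated perturbations
D = d_obs[:, None] + E                                  # perturbed observations

A = X - X.mean(axis=1, keepdims=True)    # parameter anomalies
S = Y - Y.mean(axis=1, keepdims=True)    # predicted-data anomalies
C_xy = A @ S.T / (ne - 1)
C_yy = S @ S.T / (ne - 1)
K = C_xy @ np.linalg.inv(C_yy + C_ee)    # gain including correlated errors
X_post = X + K @ (D - Y)                 # posterior parameter ensemble
print(X_post.var(), X.var())
```

Inflating the off-diagonal terms of C_ee shrinks the effective number of independent data, which is the mechanism behind the reduced update magnitude the abstract describes.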


Author(s):  
Mohammad-Reza Ashory ◽  
Farhad Talebi ◽  
Heydar R Ghadikolaei ◽  
Morad Karimpour

This study investigated the vibrational behaviour of a rotating two-blade propeller at different rotational speeds by using self-tracking laser Doppler vibrometry. Given that a self-tracking method necessitates the accurate adjustment of test setups to reduce measurement errors, a test table with sufficient rigidity was designed and built to enable the adjustment and repair of test components. The results of the self-tracking test on the rotating propeller indicated an increase in natural frequency and a decrease in the amplitude of normalized mode shapes as rotational speed increases. To assess the test results, a numerical model created in ABAQUS was used. The model parameters were tuned in such a way that the natural frequency and associated mode shapes were in good agreement with those derived using a hammer test on a stationary propeller. The mode shapes obtained from the hammer test and the numerical (ABAQUS) modelling were compared using the modal assurance criterion. The examination indicated a strong resemblance between the hammer test results and the numerical findings. Hence, the model can be employed to determine the other mechanical properties of two-blade propellers in test scenarios.
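The modal assurance criterion used above to compare hammer-test and ABAQUS mode shapes has a compact definition: MAC(i, j) = |φᵢᵀφⱼ|² / ((φᵢᵀφᵢ)(φⱼᵀφⱼ)), with values near 1 indicating closely matching shapes. A small self-contained sketch (the sinusoidal "mode shapes" are invented for the check):

```python
import numpy as np

# Modal Assurance Criterion between two sets of mode shapes; entries
# near 1 on the diagonal indicate well-matched modes.
def mac(phi_a, phi_b):
    """phi_a: (n_dof, n_modes_a), phi_b: (n_dof, n_modes_b)."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0),
                   np.sum(phi_b * phi_b, axis=0))
    return num / den

# Toy check with beam-like shapes: identical sets give 1 on the diagonal
# and near 0 off the diagonal.
x = np.linspace(0, 1, 50)
modes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)], axis=1)
M = mac(modes, modes)
print(np.round(M, 3))
```

In practice one set would hold the experimental shapes and the other the numerical ones, and a strongly diagonal MAC matrix supports the kind of model validation reported here.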


Kerntechnik ◽  
2021 ◽  
Vol 86 (2) ◽  
pp. 152-163
Author(s):  
T.-C. Wang ◽  
M. Lee

Abstract In the present study, a methodology is developed to quantify the uncertainties of special model parameters of the integral severe accident analysis code MAAP5. Here, the in-vessel hydrogen production during a core melt accident at the Lungmen Nuclear Power Station of Taiwan Power Company, an advanced boiling water reactor, is analyzed. Sensitivity studies are performed to identify those parameters with an impact on the output parameter. For this, multiple MAAP5 calculations are performed with input combinations generated by Latin Hypercube Sampling (LHS). The results are analyzed to determine the 95th-percentile value, at the 95% confidence level, of the amount of in-vessel hydrogen production. The calculations show that the default model options for IOXIDE and FGBYPA are recommended. The Pearson Correlation Coefficient (PCC) was used to determine the impact of the model parameters on the target output parameters and showed that the three parameters TCLMAX, FCO, and FOXBJ strongly influence in-vessel hydrogen generation. Suggested values for these three parameters are given.
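The LHS-plus-PCC workflow can be sketched generically: stratified samples of the inputs, a model evaluation per sample, then a correlation screen to rank parameter influence. The three-parameter toy "model" below is invented purely to show the mechanics; it has nothing to do with MAAP5.

```python
import numpy as np

# Plain Latin Hypercube Sampling on the unit cube, then a Pearson
# correlation screen of inputs against a toy output.
def lhs(n_samples, n_dims, rng):
    cols = []
    for _ in range(n_dims):
        perm = rng.permutation(n_samples)            # one sample per stratum
        cols.append((perm + rng.random(n_samples)) / n_samples)
    return np.column_stack(cols)

rng = np.random.default_rng(3)
X = lhs(200, 3, rng)                  # three input parameters on [0, 1]
# Hypothetical response: parameter 0 dominates, parameter 2 is inert.
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)

pcc = [np.corrcoef(X[:, j], y)[0, 1] for j in range(3)]
print(np.round(pcc, 2))
```

Ranking |PCC| values is how a study like this flags parameters such as TCLMAX, FCO, and FOXBJ as influential while screening out the rest.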


Energies ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 1948 ◽  
Author(s):  
Fu-Cheng Wang ◽  
Yi-Shao Hsiao ◽  
Yi-Zhe Yang

This paper discusses the optimization of hybrid power systems, which consist of solar cells, wind turbines, fuel cells, hydrogen electrolysis, chemical hydrogen generation, and batteries. Because hybrid power systems have multiple energy sources and utilize different types of storage, we first developed a general hybrid power model using Matlab/SimPowerSystems™ and then tuned the model parameters based on experimental results. This model was subsequently applied to predict the responses of four different hybrid power systems to three typical loads, without conducting individual experiments. Furthermore, cost and reliability indexes were defined to evaluate system performance and to derive optimal system layouts. Finally, the impact of hydrogen costs on system optimization was discussed. In the future, the developed method could be applied to design customized hybrid power systems.


Author(s):  
Suryanarayana R. Pakalapati ◽  
Hayri Sezer ◽  
Ismail B. Celik

Dual number arithmetic is a well-known strategy for automatic differentiation of computer codes, which gives exact derivatives, to machine accuracy, of the computed quantities with respect to any of the involved variables. A common application of this concept in Computational Fluid Dynamics, or numerical modeling in general, is to assess the sensitivity of mathematical models to the model parameters. However, dual number arithmetic, in theory, finds the derivatives of the actual mathematical expressions evaluated by the computer code. Thus the sensitivity to a model parameter found by dual number automatic differentiation is essentially that of the combination of the actual mathematical equations, the numerical scheme, and the grid used to solve the equations, not just that of the model equations alone, as implied by some studies. This aspect of the sensitivity analysis of numerical simulations using dual number automatic differentiation is explored in the current study. A simple one-dimensional advection-diffusion equation is discretized using different finite volume schemes, and the resulting systems of equations are solved numerically. Derivatives of the numerical solutions with respect to the parameters are evaluated automatically using dual number automatic differentiation. In addition, the derivatives are estimated using finite differencing for comparison. The analytical solution of the original PDE was also found, and derivatives of this solution were computed analytically. It is shown that a mathematical model can exhibit different sensitivity to a model parameter depending on the numerical method employed to solve the equations and the grid resolution used. This distinction is important, since such inter-dependence needs to be carefully addressed to avoid confusion when reporting the sensitivity of predictions to a model parameter using a computer code. A systematic assessment of numerical uncertainty in the sensitivities computed using automatic differentiation is presented.
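The dual-number mechanism itself is compact: each value carries a derivative component, and the arithmetic rules (sum rule, product rule, chain rule) propagate it exactly through whatever the code computes. A minimal sketch, with only the operators needed for the example:

```python
import math

# Minimal dual-number class for forward-mode automatic differentiation.
# A Dual carries (value, derivative); arithmetic applies the usual
# differentiation rules, so the derivative of the computed expression
# is exact to machine accuracy.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def exp(x):
    # Chain rule: d/dt exp(x(t)) = exp(x) * x'.
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# d/dx [x * exp(x)] at x = 1 is e + e = 2e.
x = Dual(1.0, 1.0)      # seed the input's derivative with 1
y = x * exp(x)
print(y.val, y.der)
```

Applied to a discretized solver rather than a closed-form expression, the derivative that emerges is that of the numerical solution, scheme and grid included, which is exactly the point the abstract makes.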


Membranes ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 66
Author(s):  
Gerardo León ◽  
Elisa Gómez ◽  
Beatriz Miguel ◽  
Asunción María Hidalgo ◽  
María Gómez ◽  
...  

Emulsion liquid membranes have been successfully used for the removal of different types of organic and inorganic pollutants by means of carrier-mediated transport mechanisms. However, the models that describe the kinetics and transport of such mechanisms are very complex due to the high number of model parameters. Starting from an analysis of the similarity between the elemental mechanisms of carrier-mediated transport in liquid membranes and of transport in adsorption processes, this paper presents an experimental analysis of the possibility of applying kinetic and mechanistic models developed for adsorption to carrier-mediated transport in emulsion liquid membranes. We study the removal of a target species, in this case Cu(II), by emulsion liquid membranes containing membrane-phase solutions of benzoylacetone (carrier agent), Span 80 (emulsifying agent) and kerosene (diluent), with hydrochloric acid as the stripping agent in the product phase. The experimental results fit the pseudo-second-order adsorption kinetic model, showing good agreement between the experimental and model parameters. Although both Cu(II) diffusion through the feed/membrane interface boundary layer and diffusion of the Cu–benzoylacetone complex through the membrane phase control Cu(II) transport, it is the former step that mainly controls the transport process.
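The pseudo-second-order model is usually fitted through its linearized form t/q = 1/(k·qe²) + t/qe, so a straight-line fit of t/q against t yields qe from the slope and k from the intercept. A sketch on synthetic data (the rate constants below are invented, not the paper's values):

```python
import numpy as np

# Pseudo-second-order kinetics: q(t) = k*qe^2*t / (1 + k*qe*t).
# Fit via the linearized form t/q = 1/(k*qe^2) + t/qe.
k_true, qe_true = 0.05, 20.0                     # hypothetical constants
t = np.array([5., 10., 20., 40., 60., 90., 120.])
q = k_true * qe_true**2 * t / (1.0 + k_true * qe_true * t)  # synthetic uptake

slope, intercept = np.polyfit(t, t / q, 1)       # straight-line fit of t/q vs t
qe = 1.0 / slope                                 # equilibrium uptake
k = 1.0 / (intercept * qe**2)                    # rate constant
print(qe, k)
```

The quality of this straight-line fit (its R² on real data) is the usual evidence that removal kinetics follow the pseudo-second-order model, as reported here.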


2008 ◽  
Vol 5 (3) ◽  
pp. 1641-1675 ◽  
Author(s):  
A. Bárdossy ◽  
S. K. Singh

Abstract. The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique best parameter vector. The parameters of hydrological models depend upon the input data, whose quality cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on the parameters, stochastically generated synthetic measurement errors were applied to the observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on the parameters was analysed. It was found that the measurement errors have a significant effect on the best-performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on the half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash–Sutcliffe efficiency was used for this study). Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany, using the conceptual HBV model.
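The half-space (Tukey) depth of a point is the smallest fraction of the sample lying in any closed half-space through that point: deep points sit centrally in the good-parameter set, shallow points are near its boundary. It can be approximated by scanning random directions, as in this sketch (the Gaussian point cloud stands in for a set of well-performing parameter vectors):

```python
import numpy as np

# Approximate half-space (Tukey) depth: minimize, over random directions,
# the fraction of points on either side of a hyperplane through `point`.
def halfspace_depth(point, cloud, n_dirs=2000, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    dirs = rng.standard_normal((n_dirs, cloud.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    proj = (cloud - point) @ dirs.T                       # (n_points, n_dirs)
    frac = (proj >= 0).mean(axis=0)                       # fraction per side
    return np.minimum(frac, 1.0 - frac).min()

rng = np.random.default_rng(4)
cloud = rng.standard_normal((500, 2))        # stand-in "good" parameter vectors
d_center = halfspace_depth(np.zeros(2), cloud)          # central: deep
d_outlier = halfspace_depth(np.array([4.0, 4.0]), cloud)  # peripheral: shallow
print(d_center, d_outlier)
```

Selecting parameter vectors with high depth relative to the well-performing set is what makes the chosen parameters robust to the perturbations described above.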


InterConf ◽  
2021 ◽  
pp. 333-346
Author(s):  
Andriy Artikula ◽  
Dmytro Britov ◽  
Volodymyr Dzhus ◽  
Borys Haibadulov ◽  
Anastasiia Haibadulova ◽  
...  

The rapid modern development of science and technology drives growing information needs in all branches of human activity. At present, there are ample opportunities to increase information security by combining sources of information into a single system. At the same time, such a merger entails specific difficulties and features that, taken together, complicate the implementation of the proposed solutions. The paper considers the peculiarities of combining different types of radar stations into a single information system. Measurement errors of individual parameters and their influence on the system characteristics are considered. Options for solving the problems that arise are proposed.


2002 ◽  
Vol 1802 (1) ◽  
pp. 105-114 ◽  
Author(s):  
R. Tapio Luttinen

The Highway Capacity Manual (HCM) 2000 provides methods to estimate performance measures and the level of service for different types of traffic facilities. Because neither the input data nor the model parameters are totally accurate, there is an element of uncertainty in the results. An analytical method was used to estimate the uncertainty in the service measures of two-lane highways. The input data and the model parameters were considered as random variables. The propagation of error through the arithmetic operations in the HCM 2000 methodology was estimated. Finally, the uncertainty in the average travel speed and percent time spent following was analyzed, and four approaches were considered to deal with uncertainty in the level of service.
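The analytical propagation of error referred to here is the first-order (delta-method) rule: for independent inputs, Var(f) ≈ Σᵢ (∂f/∂xᵢ)² Var(xᵢ). A generic sketch follows; the linear speed-flow "service measure" and its coefficients are invented for illustration and are not the HCM 2000 equations.

```python
import math

# First-order (delta-method) error propagation: for independent inputs,
# Var(f) ~ sum over i of (df/dx_i)^2 * Var(x_i).
def propagate(f, x, sd, h=1e-6):
    """Return (f(x), approximate standard deviation of f(x))."""
    fx = f(*x)
    var = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        dfdx = (f(*xp) - fx) / h          # forward-difference sensitivity
        var += (dfdx * sd[i]) ** 2
    return fx, math.sqrt(var)

# Toy service measure: average travel speed falling linearly with flow.
def speed(ffs, flow):
    return ffs - 0.0125 * flow            # km/h; coefficients hypothetical

# Free-flow speed 100 +/- 2 km/h, flow 800 +/- 50 veh/h.
val, sigma = propagate(speed, [100.0, 800.0], [2.0, 50.0])
print(val, sigma)
```

Chaining this rule through each arithmetic step of a methodology is how the uncertainty in derived service measures, and hence in the assigned level of service, can be quantified.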

