Determining a reasonable range of relative numerical tolerance values for simulating deterministic models of biochemical reactions

2018 ◽  
Author(s):  
Ming Yang ◽  
Louis Z. Yang

Abstract. It is unclear what values of relative numerical tolerance should be chosen when simulating a deterministic model of a biochemical reaction, which impairs the modeling effort since the simulation outcomes of a model may depend on the chosen tolerance values. In an attempt to provide a guideline for selecting appropriate numerical tolerance values in simulations of in vivo biochemical reactions, reasonable values were estimated based on the uncertainty principle and assumptions about related cellular parameters. The calculations indicate that relative numerical tolerance values can reasonably be set at or around 10−4 for concentrations expressed in ng/L. This work also suggests that reducing relative numerical tolerance values further may produce erroneous simulation results.
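The role of the relative tolerance setting can be illustrated with a generic ODE solver. The following is a minimal sketch, not the authors' code: the first-order degradation model, rate constant, and initial concentration are illustrative assumptions, used only to show how the `rtol` argument of `scipy.integrate.solve_ivp` influences a simulated endpoint.

```python
# Sketch: comparing simulation outcomes of a simple deterministic biochemical
# model under different relative tolerances (assumed model, not the paper's).
import numpy as np
from scipy.integrate import solve_ivp

def decay(t, y, k=0.5):
    # dC/dt = -k*C : first-order degradation of a species C (ng/L)
    return -k * y

y0 = [100.0]          # assumed initial concentration in ng/L
t_span = (0.0, 10.0)

results = {}
for rtol in (1e-2, 1e-4, 1e-8):
    sol = solve_ivp(decay, t_span, y0, rtol=rtol, atol=1e-12)
    results[rtol] = sol.y[0, -1]   # concentration at t = 10

exact = y0[0] * np.exp(-0.5 * 10.0)
for rtol, value in results.items():
    print(f"rtol={rtol:g}: C(10)={value:.6f} (exact {exact:.6f})")
```

Comparing the endpoints against the analytic solution makes visible how much of the simulated outcome is attributable to the tolerance setting rather than to the model itself.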

Author(s):  
Anindo Bhattacharjee

The romanticization in management of numbers, metrics, and deterministic models driven by mathematics is not new, and it still persists. This is exactly the problem that classical physicists faced in the late 19th century, until Werner Heisenberg introduced the uncertainty principle and opened the doors to quantum physics, which challenged the deterministic view of the physical world largely shaped by the Newtonian view. In this paper, we propose an uncertainty principle of management, then list a set of factors that capture this uncertainty well, and arrive at a new view of scientific management thought. This new view, which we call the Quantum View of Management (QVM), is based on the major tenets of ancient philosophical traditions, viz. Jainism, Taoism, Advaita Vedanta, Buddhism, and Greek philosophers such as Heraclitus.


2021 ◽  
Vol 20 (5) ◽  
pp. 1-34
Author(s):  
Edward A. Lee

This article is about deterministic models, what they are, why they are useful, and what their limitations are. First, the article emphasizes that determinism is a property of models, not of physical systems. Whether a model is deterministic or not depends on how one defines the inputs and behavior of the model. To define behavior, one has to define an observer. The article compares and contrasts two classes of ways to define an observer, one based on the notion of “state” and another that more flexibly defines the observables. The notion of “state” is shown to be problematic and to lead to nondeterminism that is avoided when the observables are defined differently. The article examines determinism in models of the physical world. In what may surprise many readers, it shows that Newtonian physics admits nondeterminism and that quantum physics may be interpreted as a deterministic model. Moreover, it shows that both relativity and quantum physics undermine the notion of “state” and therefore require more flexible ways of defining observables. Finally, the article reviews results showing that sufficiently rich sets of deterministic models are incomplete. Specifically, nondeterminism is inescapable in any system of models rich enough to encompass Newton’s laws.


Author(s):  
Xiaolan Han ◽  
Shengdun Zhao ◽  
Chen Liu ◽  
Chao Chen ◽  
Fan Xu

Given the importance of the geometrical design of clinching tools, the clinching process with extensible dies was investigated numerically and experimentally in this study to seek optimal tool parameters. The joining parameters, including punch corner radius, sliding distance, die depth, and bottom thickness, were optimized using an orthogonal experimental design simulation method based on the evaluation of tensile strength. The simulation results were validated through experiments on aluminum alloy Al5052, and the orthogonal experimental design simulation results showed reasonably good agreement with the experimental results. To further validate the simulation model, different bottom thicknesses within a reasonable range of values were studied. The results also indicated that the simulation model could be employed to predict joint formation in the clinching process with extensible dies.


2016 ◽  
Vol 16 (24) ◽  
pp. 15629-15652 ◽  
Author(s):  
Ioannis Kioutsioukis ◽  
Ulas Im ◽  
Efisio Solazzo ◽  
Roberto Bianconi ◽  
Alba Badia ◽  
...  

Abstract. Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the other three methods rely on assigning optimum weights to members or constraining the ensemble to those members that meet certain conditions in the time or frequency domain. The two datasets were created for the first and second phases of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent predictable signals can be extracted from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to the different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill than each station's best deterministic model at no more than 60 % of the sites, indicating that at the remaining sites members combine unbalanced skill differences with error dependence. Promoting the right balance of accuracy and diversity within the ensemble yields an average additional skill of up to 31 % compared to using the full ensemble in an unconditional way.
The skill improvements were higher for O3 and lower for PM10, associated with the extent of potential changes in the joint distribution of accuracy and diversity in the ensembles. The skill enhancement was superior using the weighting scheme, but the training period required to acquire representative weights was longer compared to the sub-selecting schemes. Further development of the method is discussed in the conclusion.
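The contrast between the unconditional ensemble mean and a weighted ensemble can be sketched in a few lines. This is an illustrative toy, not the AQMEII implementation: the synthetic observations, member biases, and the least-squares weighting are all assumptions standing in for the paper's optimum-weight scheme.

```python
# Sketch: optimum ensemble weights fitted by least squares against observations,
# compared with the unconditional (equal-weight) ensemble mean.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(40.0, 10.0, size=200)             # synthetic "observations"
members = np.stack([obs + rng.normal(b, s, 200)    # biased, noisy model members
                    for b, s in [(5, 4), (-3, 6), (1, 8), (8, 3)]])

# Unconditional ensemble mean: equal weights for all members.
mean_forecast = members.mean(axis=0)

# Optimum weights: solve min_w ||members.T @ w - obs||^2 by least squares.
w, *_ = np.linalg.lstsq(members.T, obs, rcond=None)
weighted_forecast = members.T @ w

rmse = lambda f: np.sqrt(np.mean((f - obs) ** 2))
print(f"unconditional mean RMSE: {rmse(mean_forecast):.2f}")
print(f"weighted ensemble RMSE:  {rmse(weighted_forecast):.2f}")
```

Because the least-squares weights are fitted on the same data, the weighted forecast cannot do worse than the equal-weight mean here; in practice, as the abstract notes, a sufficiently long training period is needed for such weights to be representative out of sample.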


2017 ◽  
Author(s):  
Nuno R. Nené ◽  
Alistair S. Dunham ◽  
Christopher J. R. Illingworth

Abstract. A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the non-deterministic properties of mutation in a finite population. We propose an alternative approach, which we denote the delay-deterministic model, that corrects for this error. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model.
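The kind of deterministic model at issue can be made concrete with the standard haploid selection recursion. This is a generic textbook sketch, not the authors' delay-deterministic model: the initial frequency and selection coefficient are arbitrary illustrative values.

```python
# Sketch: deterministic trajectory of a beneficial variant's frequency q under
# selection coefficient s (standard haploid selection recursion).
def variant_frequency(q0, s, generations):
    q = q0
    traj = [q]
    for _ in range(generations):
        q = q * (1 + s) / (1 + s * q)   # frequency after one generation
        traj.append(q)
    return traj

traj = variant_frequency(q0=0.01, s=0.05, generations=100)
print(f"q(0)={traj[0]:.3f} -> q(100)={traj[-1]:.3f}")
```

Such a model treats the variant's frequency as a smooth function of time from the moment it appears; the abstract's point is that the stochastic timing of mutation in a finite population can make this assumption, and hence the inferred selection coefficient, misleading.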


2021 ◽  
Author(s):  
Hyukpyo Hong ◽  
Jinsu Kim ◽  
M Ali Al-Radhawi ◽  
Eduardo D. Sontag ◽  
Jae Kyoung Kim

Long-term behaviors of biochemical reaction networks (BRNs) are described by steady states in deterministic models and stationary distributions in stochastic models. Unlike deterministic steady states, stationary distributions capturing the inherent fluctuations of reactions are extremely difficult to derive analytically due to the curse of dimensionality. Here, we develop a method to derive analytic stationary distributions from deterministic steady states by transforming BRNs to have a special dynamic property, called complex balancing. Specifically, we merge nodes and edges of BRNs to match the in- and out-flows of each node. This allows us to derive the stationary distributions of a large class of BRNs, including the autophosphorylation networks of EGFR, PAK1, and Aurora B kinase and a genetic toggle switch. This reveals unique properties of their stochastic dynamics such as robustness, sensitivity, and multi-modality. Importantly, we provide a user-friendly computational package, CASTANET, that automatically derives symbolic expressions for the stationary distributions of BRNs to help understand their long-term stochasticity.


Land ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 971
Author(s):  
Bradley Franklin ◽  
Kurt Schwabe ◽  
Lucia Levers

During California’s severe drought from 2011 to 2017, a significant shift in irrigated area from annual to perennial crops occurred. Due to the time required to bring perennial crops to maturity, more perennial acreage likely increases the opportunity costs of fallowing, a common drought mitigation strategy. Increases in the costs of fallowing may put additional pressure on another common “go-to” drought mitigation strategy—groundwater pumping. Yet, overdrafted groundwater systems worldwide are increasingly becoming the norm. In response to depleting aquifers, as evidenced in California, sustainable groundwater management policies are being implemented. There has been little modeling of the potential effect of increased perennial crop production on groundwater use and the implications for public policy. A dynamic, integrated deterministic model of agricultural production in Kern County, CA, is developed here with both groundwater and perennial area by vintage treated as stock variables. Model scenarios investigate the impacts of surface water reductions and perennial prices on land and groundwater use. The results generally indicate that perennial production may lead to slower aquifer drawdown compared with deterministic models lacking perennial crop dynamics, highlighting the importance of accounting for the dynamic nature of perennial crops in understanding the co-evolution of agricultural and groundwater systems under climate change.


2019 ◽  
Vol 11 (2) ◽  
pp. 229-245
Author(s):  
Fatemeh Delkhosh ◽  
Seyed Jafar Sadjadi

Abstract. The growing demand for fuels, combined with the fact that fossil fuel resources are limited, has led the world to seek renewable energy sources such as biofuels. Micro-algae can be an efficient source of biofuel energy, since their use significantly reduces air pollution. In this paper, we develop a micro-algae biofuel supply chain through a two-stage approach, aiming to commercialize micro-algae as a new source of energy. In the first stage, we utilize the Best-Worst Method (BWM) to determine the best cultivation system, and in the second stage, a bi-objective mathematical model is presented that simultaneously optimizes economic and environmental objectives. We also propose a robust optimization model to deal with the uncertain nature of the biofuel supply chain. Our analysis of the trade-off between the supply chain’s total cost and unfulfilled demand yields interesting managerial insights. Furthermore, to show the effectiveness of the robust optimization model, we compare the performance of the robust and deterministic models, and the results show that the robust model outperforms the deterministic model in all scenarios. Finally, sensitivity analysis on critical parameters is conducted to help decision-makers find the optimal trade-off between investment and its benefits.
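The robust-versus-deterministic comparison can be illustrated with a deliberately tiny capacity-planning toy. This is not the paper's bi-objective model: the costs, penalty, and demand scenarios below are all hypothetical numbers chosen only to show why a worst-case (robust) plan can dominate a nominal (deterministic) plan under uncertainty.

```python
# Sketch: deterministic plan sized for nominal demand vs. robust plan sized
# for the worst-case demand scenario (all numbers are assumptions).
unit_cost = 3.0          # cost per unit of installed capacity
shortage_penalty = 12.0  # cost per unit of unfulfilled demand
nominal_demand = 100.0
scenarios = [90.0, 100.0, 115.0, 130.0]   # possible demand realizations

def total_cost(capacity, demand):
    shortage = max(0.0, demand - capacity)
    return unit_cost * capacity + shortage_penalty * shortage

det_capacity = nominal_demand     # deterministic: plan for nominal demand
rob_capacity = max(scenarios)     # robust: cover the worst-case scenario

det_worst = max(total_cost(det_capacity, d) for d in scenarios)
rob_worst = max(total_cost(rob_capacity, d) for d in scenarios)
print(f"worst-case cost, deterministic plan: {det_worst}")
print(f"worst-case cost, robust plan:        {rob_worst}")
```

Whenever the shortage penalty exceeds the capacity cost, as here, the robust plan's worst-case cost is lower, which mirrors the abstract's finding that the robust model outperforms the deterministic one across scenarios.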


2020 ◽  
Vol 100 (3-4) ◽  
pp. 753-764 ◽  
Author(s):  
M. Nazmul Huda ◽  
Pengcheng Liu ◽  
Chitta Saha ◽  
Hongnian Yu

Abstract. This paper presents a miniature hybrid capsule robot for minimally invasive in vivo interventions, such as capsule endoscopy within the GI (gastrointestinal) tract. It proposes new modes of operation for the hybrid robot, namely a hybrid mode and an anchoring mode. The hybrid mode assists the robot in opening an occlusion or widening a narrowing. The anchoring mode enables the robot to stay in a specific place, overcoming external disturbances (e.g. peristalsis), for better and prolonged observation. The modelling of the legged, hybrid, and anchoring modes is presented and analysed. Simulation results show robot propulsion in the various modes. The hybrid capsule robot, comprising four operating modes, is more effective for locomotion and observation within the GI tract than a robot with a single means of locomotion, as the hybrid robot can switch among the operating modes to suit the situation or task.

