A data-model synthesis to explain variability in calcification observed during a CO2 perturbation mesocosm experiment

2016 ◽  
Author(s):  
Shubham Krishna ◽  
Markus Schartau

Abstract. A series of studies were conducted during the last two decades to investigate the effects of ocean acidification (OA) on phytoplankton physiology, plankton ecology, and the biogeochemical dynamics of marine ecosystems. Among those studies are experiments with mesocosms: enclosed tanks or bags of water that typically contain a natural plankton community from the surrounding environment. The Pelagic Ecosystem CO2 Enrichment Study (PeECE-I) is one such experiment, in which mesocosms were perturbed with different carbon dioxide (CO2) concentrations to determine responses in the growth dynamics of the coccolithophorid Emiliania huxleyi, a marine calcifying alga. The data from replicate mesocosms of PeECE-I show considerable natural variability, and significant differences were found in the accumulation of particulate inorganic carbon (PIC) between mesocosms of similar CO2 treatments. In our study we reanalyse the PeECE-I data and apply an optimality-based model approach to explain most of the observed variability, with a major focus on total alkalinity (TA) and calcification. We explore how much of the observed variability in the data can be explained by variations in initial conditions and by the effect of the CO2 perturbations. In our model approach, changes in cellular calcite formation are resolved at the organism level in response to variations in CO2. With a data assimilation (DA) method we obtain three distinct ensembles of model solutions, with low, medium and high calcification rates. Optimised values of initial conditions turn out to be correlated with estimates of physiological model parameters. The spread of ensemble model solutions, corresponding to these combinations of parameter estimates, captures most of the observed variability. Optimised model solutions of the high-CO2 treatment systematically overestimate observed PIC production; thus, the simulated CO2 effect on calcification is likely too weak.
At the same time our model results yield large differences in optimal mass flux estimates of carbon and of nitrogen even between mesocosms exposed to similar CO2 conditions.

2008 ◽  
Vol 10 (2) ◽  
pp. 153-162 ◽  
Author(s):  
B. G. Ruessink

When a numerical model is to be used as a practical tool, its parameters should preferably be stable and consistent, that is, possess a small uncertainty and be time-invariant. Using data and predictions of alongshore mean currents flowing on a beach as a case study, this paper illustrates how parameter stability and consistency can be assessed using Markov chain Monte Carlo. Within a single calibration run, Markov chain Monte Carlo estimates the parameter posterior probability density function, its mode being the best-fit parameter set. Parameter stability is investigated by stepwise adding new data to a calibration run, while consistency is examined by calibrating the model on different datasets of equal length. The results for the present case study indicate that various tidal cycles with strong (say, >0.5 m/s) currents are required to obtain stable parameter estimates, and that the best-fit model parameters and the underlying posterior distribution are strongly time-varying. This inconsistent parameter behavior may reflect unresolved variability of the processes represented by the parameters, or may represent compensating behavior for temporal violations of specific model assumptions.
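The posterior-estimation step described above can be sketched with a minimal Metropolis sampler. This is an illustrative one-parameter toy (the linear current model, synthetic data, and noise level are invented here, not Ruessink's setup): the chain explores the posterior of a single coefficient, and its mean approximates the best-fit value.

```python
# Minimal Metropolis sampler for a one-parameter toy model (illustrative only;
# the model, data and noise level are assumptions, not the paper's setup).
import math
import random

random.seed(42)

# Synthetic "observed" currents: v = c * forcing + small noise
forcing = [0.2, 0.5, 0.8, 1.1, 1.4]
true_c = 0.6
obs = [true_c * f + 0.02 * (random.random() - 0.5) for f in forcing]
sigma = 0.05  # assumed measurement-error standard deviation

def log_likelihood(c):
    return -sum((o - c * f) ** 2 for o, f in zip(obs, forcing)) / (2 * sigma ** 2)

def metropolis(n_steps=5000, step=0.05, c0=0.0):
    """Random-walk Metropolis: propose, then accept with prob min(1, L'/L)."""
    chain = [c0]
    ll = log_likelihood(c0)
    for _ in range(n_steps):
        cand = chain[-1] + random.gauss(0.0, step)
        ll_cand = log_likelihood(cand)
        if math.log(random.random()) < ll_cand - ll:   # accept
            chain.append(cand)
            ll = ll_cand
        else:                                          # reject: repeat state
            chain.append(chain[-1])
    return chain

chain = metropolis()
burned = chain[1000:]                        # discard burn-in
posterior_mean = sum(burned) / len(burned)   # point estimate from the chain
```

Stepwise adding data, as the paper does, amounts to extending `obs`/`forcing` and re-running the sampler, then checking whether the posterior drifts.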


1991 ◽  
Vol 18 (2) ◽  
pp. 320-327 ◽  
Author(s):  
Murray A. Fitch ◽  
Edward A. McBean

A model is developed for the prediction of river flows resulting from combined snowmelt and precipitation. The model employs a Kalman filter to reflect uncertainty both in the measured data and in the system model parameters. The forecasting algorithm is used to develop multi-day forecasts for the Sturgeon River, Ontario. The algorithm is shown to produce good 1-day and 2-day-ahead forecasts, but the linear prediction model is found to be inadequate for longer-term forecasts. Good initial parameter estimates are shown to be essential for optimal forecasting performance. Key words: Kalman filter, streamflow forecast, multi-day, streamflow, Sturgeon River, MISP algorithm.
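The predict/update cycle behind such a forecast can be shown with a scalar Kalman filter. This is a schematic sketch: the AR(1) flow dynamics, the noise variances, and the example flow series below are assumed for illustration, not taken from the Fitch and McBean model.

```python
# One-dimensional Kalman filter producing 1-day-ahead streamflow forecasts
# (schematic; dynamics and noise values are assumed, not the paper's).
flows = [10.0, 12.0, 15.0, 14.0, 13.0, 16.0]  # hypothetical daily flows (m3/s)

a = 0.9   # assumed AR(1) persistence of daily flow
q = 1.0   # process-noise variance
r = 0.5   # measurement-noise variance

x, p = flows[0], 1.0          # initial state estimate and its variance
one_day_forecasts = []
for z in flows[1:]:
    # Predict one day ahead from the current state estimate
    x_pred = a * x
    p_pred = a * a * p + q
    one_day_forecasts.append(x_pred)
    # Update the state with the newly observed flow
    k = p_pred / (p_pred + r)         # Kalman gain
    x = x_pred + k * (z - x_pred)
    p = (1.0 - k) * p_pred
```

A 2-day-ahead forecast would simply apply the prediction step twice before the next update, which is where a purely linear model starts to lose skill.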


2011 ◽  
Vol 64 (S1) ◽  
pp. S3-S18 ◽  
Author(s):  
Yuanxi Yang ◽  
Jinlong Li ◽  
Junyi Xu ◽  
Jing Tang

Integrated navigation using multiple Global Navigation Satellite Systems (GNSS) is beneficial to increase the number of observable satellites, alleviate the effects of systematic errors and improve the accuracy of positioning, navigation and timing (PNT). When multiple constellations and multiple frequency measurements are employed, the functional and stochastic models as well as the estimation principle for PNT may differ. Therefore, the commonly used definition of "dilution of precision" (DOP), based on least squares (LS) estimation and unified functional and stochastic models, is no longer applicable. In this paper, three types of generalised DOPs are defined. The first type of generalised DOP is based on the error influence function (IF) of pseudo-ranges, which reflects the geometric strength of the measurements, the error magnitude and the estimation risk criteria. When least squares estimation is used, the first type of generalised DOP is identical to the one commonly used. In order to define the first type of generalised DOP, an IF of signal-in-space (SIS) errors on the parameter estimates of PNT is derived. The second type of generalised DOP is defined based on the functional model with additional systematic parameters induced by the compatibility and interoperability problems among different GNSS systems. The third type of generalised DOP is defined based on Bayesian estimation, in which a priori information on the model parameters is taken into account; this is suitable for evaluating the precision of kinematic positioning or navigation. Different types of generalised DOPs are suitable for different PNT scenarios, and an example of the calculation of these DOPs for multi-GNSS systems including GPS, GLONASS, Compass and Galileo is given. New observation equations of Compass and GLONASS that may contain additional parameters for interoperability are specifically investigated. The results show that if the interoperability of multi-GNSS is not achieved, an increased number of satellites will not significantly reduce the generalised DOP value. Furthermore, outlying measurements do not change the original DOP, but do change the first type of generalised DOP, which includes a robust error IF. A priori information on the model parameters also reduces the DOP.
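The classical LS-based DOP that the generalised definitions extend is computed from the geometry matrix of unit line-of-sight vectors plus a receiver-clock column. The sketch below shows that baseline computation; the satellite positions are an arbitrary example geometry, not from the paper.

```python
# Classical (least-squares) DOP from satellite geometry, the baseline that
# the generalised DOPs extend. Satellite positions are an invented example.
import numpy as np

def dop(sat_positions, receiver=np.zeros(3)):
    """Return (GDOP, PDOP) from unit line-of-sight vectors + clock column."""
    rows = []
    for s in np.asarray(sat_positions, dtype=float):
        los = s - receiver
        rows.append(np.append(los / np.linalg.norm(los), 1.0))
    H = np.array(rows)                    # design (geometry) matrix
    Q = np.linalg.inv(H.T @ H)            # cofactor matrix of the LS estimate
    return float(np.sqrt(np.trace(Q))), float(np.sqrt(np.trace(Q[:3, :3])))

# Four satellites: three spread near the horizon, one near zenith
sats = 2e7 * np.array([[0.995,  0.0,     0.0995],
                       [-0.4975,  0.8617, 0.0995],
                       [-0.4975, -0.8617, 0.0995],
                       [0.0,      0.0,    1.0]])
gdop, pdop = dop(sats)
```

The first generalised DOP replaces the implicit LS influence function in `Q` with a robust one, which is why outliers affect it but not this classical value.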


2021 ◽  
Vol 13 (5) ◽  
pp. 771-780
Author(s):  
Shou-Kai Chen ◽  
Bo-Wen Xu

The adiabatic temperature rise model of mass concrete is very important for temperature field simulation, as well as for assessing crack resistance and controlling the temperature of concrete structures. In this research, a thermal kinetics analysis was performed to study the exothermic hydration reaction of concrete, and an adiabatic temperature rise model was proposed. The proposed model considers influencing factors including the initial temperature, temperature history, activation energy, and the degree of completion of the adiabatic temperature rise, and it is theoretically well founded with a definite physical meaning. Adiabatic temperature rise tests were performed at different initial temperatures, and the data were employed in a regression analysis of the model parameters and initial conditions. The same function was applied to describe the dynamic change of the adiabatic temperature rise rate for different initial temperatures and temperature histories, and was subsequently employed in a finite element analysis of the concrete temperature field. The test results indicated that the proposed model adequately fits the adiabatic temperature rise data at different initial temperatures and accurately predicts the changing pattern of the adiabatic temperature rise of concrete. Compared with results from the traditional age-based adiabatic temperature rise model, a calculation example revealed that simulations using the proposed model more accurately reflect the temperature change pattern of concrete under heat dissipation conditions.
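The coupling of temperature history and activation energy described above can be sketched with a toy degree-of-hydration model: the reaction rate follows an Arrhenius law in the current temperature, and under adiabatic conditions all released heat feeds back into that temperature. The rate constant, activation energy and ultimate rise below are assumed illustration values, not the paper's calibration.

```python
# Toy degree-of-hydration model of adiabatic temperature rise with an
# Arrhenius rate term (all numeric values are assumptions for illustration).
import math

E_a = 40000.0    # activation energy, J/mol (assumed)
R = 8.314        # universal gas constant, J/(mol K)
k_ref = 0.05     # hydration rate at the 293.15 K reference, 1/h (assumed)
dT_max = 45.0    # ultimate adiabatic temperature rise, K (assumed)

def adiabatic_rise(T0_celsius, hours=48, dt=0.5):
    """Integrate d(alpha)/dt = k(T) * (1 - alpha) under adiabatic conditions."""
    alpha = 0.0
    T = T0_celsius + 273.15
    for _ in range(int(hours / dt)):
        # Arrhenius correction of the rate for the current temperature
        k = k_ref * math.exp(-E_a / R * (1.0 / T - 1.0 / 293.15))
        alpha = min(1.0, alpha + k * (1.0 - alpha) * dt)
        T = T0_celsius + 273.15 + dT_max * alpha   # all heat stays in the mass
    return dT_max * alpha

rise_cold = adiabatic_rise(10.0)   # cold initial temperature
rise_warm = adiabatic_rise(30.0)   # warm initial temperature
```

This reproduces the qualitative effect the model targets: a warmer initial temperature accelerates hydration and hence the early temperature rise, which a purely age-based model cannot capture.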


2013 ◽  
Vol 756-759 ◽  
pp. 4377-4381
Author(s):  
Jing Hou ◽  
Jin Xiang Pian ◽  
Yan Ling Sun ◽  
Ke Xu

To improve the control accuracy of the strip coiling temperature in the laminar cooling process under varying working conditions, an intelligent method for setting the cooling water volume is investigated in this paper. First, a mechanistic model of the strip coiling temperature is built. Second, the key model parameters are identified with case-based reasoning (CBR) technology to improve the model accuracy. Finally, a model-based cooling water volume setting method is proposed in which a disturbance input method is applied. Simulation results show that the proposed method improves the strip coiling temperature accuracy when the operating conditions change. The improvement is due to the CBR technology, which adjusts the key model parameters according to the varying operating conditions; the setting values based on the improved model are thus adapted to the changing working conditions, giving the method self-adaptive ability.


2018 ◽  
Author(s):  
Adel Albaba ◽  
Massimiliano Schwarz ◽  
Corinna Wendeler ◽  
Bernard Loup ◽  
Luuk Dorren

Abstract. This paper presents a discrete-element-based elasto-plastic-adhesive model which is adapted and tested for reproducing hillslope debris flows. The numerical model represents three phases of particle contact: elastic, plastic and adhesive. The model's ability to simulate different types of cohesive granular flows was tested over different ranges of flow velocities and heights. The basic model parameters, the basal friction (φb) and the normal restitution coefficient (εn), were calibrated using field experiments of hillslope debris flows impacting two sensors. Simulations of 50 m3 of material were carried out on a channelized surface 41 m long and 8 m wide. The calibration was based on measurements of flow height, flow velocity and the pressure applied to a sensor. Results of the numerical model matched the field data well in terms of pressure and flow velocity, while less agreement was observed for flow height. These discrepancies were due in part to the deposition of material in the field test, which is not reproduced in the model. A parametric study was conducted to further investigate the effect of the model parameters and the inclination angle on flow height, velocity and pressure. Best-fit model parameters for selected experimental tests suggested that a link might exist between the model parameters φb and εn and the initial conditions of the tested granular material (bulk density, water content and fines content). The good performance of the model against the full-scale field experiments encourages further investigation through lab-scale experiments with detailed variation of water and fines content, to better understand their link to the model's parameters.
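The three contact phases can be illustrated with a one-dimensional normal-force law: a loading branch, a stiffer unloading branch (leaving plastic deformation), and an adhesive limit that resists separation. The stiffnesses and adhesion strength below are invented for illustration and are not the paper's calibrated values.

```python
# Schematic 1-D elasto-plastic-adhesive normal contact law (three phases:
# elastic loading, plastic unloading, adhesion). All values are invented.
def contact_force(overlap, max_overlap, k_load=2.0e4, k_unload=4.0e4, f_adh=-5.0):
    """Normal force (N) for a given overlap (m) and historical max overlap."""
    if overlap >= max_overlap:
        # Loading branch: force grows with the loading stiffness
        return k_load * overlap
    # Unloading branch: stiffer slope down from the peak force
    f = k_unload * (overlap - max_overlap) + k_load * max_overlap
    # Adhesive phase: tensile force is capped at the adhesion limit
    return max(f, f_adh)

f_loading = contact_force(1.0e-3, 1.0e-3)   # at peak overlap (loading)
f_unload = contact_force(0.8e-3, 1.0e-3)    # partially unloaded, still positive
f_detach = contact_force(0.2e-3, 1.0e-3)    # held only by adhesion
```

In a full DEM code this law would act along each contact normal, with the basal friction φb and restitution εn governing the tangential and dissipative parts.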


1981 ◽  
Vol 240 (5) ◽  
pp. R259-R265 ◽  
Author(s):  
J. J. DiStefano

Design of optimal blood sampling protocols for kinetic experiments is discussed and evaluated, with the aid of several examples, including an endocrine system case study. The criterion of optimality is maximum accuracy of the kinetic model parameter estimates. A simple example illustrates why a sequential experiment approach is required: optimal designs depend on the true model parameter values, knowledge of which is usually a primary objective of the experiment, as well as on the structure of the model and the measurement error (e.g., assay) variance. The methodology is evaluated from the results of a series of experiments designed to quantify the dynamics of distribution and metabolism of three iodothyronines, T3, T4, and reverse-T3. This analysis indicates that 1) the sequential optimal experiment approach can be effective and efficient in the laboratory, 2) it works in the presence of reasonably controlled biological variation, producing sufficiently robust sampling protocols, and 3) optimal designs can be highly efficient in practice, requiring for maximum accuracy a number of blood samples equal to the number of independently adjustable model parameters, no more and no less.
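The dependence of the optimal design on the (unknown) true parameters can be made concrete with a D-optimality sketch: given current parameter guesses, choose the sampling times that maximise the determinant of the Fisher information. The two-parameter decay model and the parameter guesses below are illustrative, not DiStefano's iodothyronine system; note the optimal design uses exactly as many samples as parameters.

```python
# D-optimal sampling-time search for a two-parameter decay model
# y(t) = A * exp(-k * t). Model and parameter guesses are illustrative.
import itertools
import math

A, k = 1.0, 0.5   # current best parameter guesses (assumed)

def fisher_det(times):
    """det of the 2x2 Fisher information matrix (unit noise) at `times`."""
    # Sensitivities: dy/dA = exp(-k t), dy/dk = -A t exp(-k t)
    s11 = sum(math.exp(-2 * k * t) for t in times)
    s22 = sum((A * t * math.exp(-k * t)) ** 2 for t in times)
    s12 = sum(-A * t * math.exp(-2 * k * t) for t in times)
    return s11 * s22 - s12 ** 2

candidate_times = [0.5 * i for i in range(1, 21)]   # 0.5 .. 10.0 hours
# Two parameters -> search all two-sample designs
best = max(itertools.combinations(candidate_times, 2), key=fisher_det)
```

Because `fisher_det` is evaluated at the guessed (A, k), the chosen times shift as the estimates improve, which is exactly why the design must be sequential.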


2006 ◽  
Vol 3 (1) ◽  
pp. 69-114 ◽  
Author(s):  
A. El Ouazzani Taibi ◽  
G. P. Zhang ◽  
A. Elfeki

Abstract. The research presented in this paper focuses on an application of a newly developed physically-based watershed modelling approach, called the Representative Elementary Watershed (REW) approach. The study stresses the effects of input-parameter uncertainty on the watershed responses (i.e. the simulated discharges). The approach was applied to the Zwalm catchment, an agriculture-dominated watershed with a drainage area of 114.3 km2 located in East-Flanders, Belgium. Uncertainty analysis of the model parameters is limited to the saturated hydraulic conductivity because of its strong influence on the hydrologic behavior of the watershed. The assessment of output uncertainty is performed using the Monte Carlo method. The ensemble statistics of the watershed responses and their uncertainties are calculated and compared with the measurements. The results show that the measured discharges fall within the 95% confidence interval of the modelled discharge.
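The Monte Carlo step above can be sketched in a few lines: sample the uncertain conductivity, propagate each sample through the model, and read the 95% interval off the sorted outputs. The lognormal spread and the toy discharge function below are assumptions for illustration, not the REW model or the Zwalm calibration.

```python
# Monte Carlo propagation of saturated-hydraulic-conductivity uncertainty
# through a toy discharge model (model and spread are illustrative assumptions).
import math
import random
import statistics

random.seed(1)

def discharge(k_sat, rain=5.0):
    """Hypothetical steady discharge (mm/day) for a given conductivity."""
    return rain * (1.0 - math.exp(-k_sat))   # crude infiltration/runoff split

# Conductivity is commonly treated as lognormal; mu/sigma here are assumed
samples = [discharge(random.lognormvariate(math.log(1.0), 0.5))
           for _ in range(2000)]
samples.sort()
lo = samples[int(0.025 * len(samples))]      # 2.5th percentile
hi = samples[int(0.975 * len(samples))]      # 97.5th percentile
median = statistics.median(samples)
```

Comparing a measured discharge against `[lo, hi]` is the check the abstract reports: the observations fall inside the 95% band of the ensemble.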


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257958
Author(s):  
Miguel Navascués ◽  
Costantino Budroni ◽  
Yelena Guryanova

In the context of epidemiology, policies for disease control are often devised through a mixture of intuition and brute force, whereby the set of logically conceivable policies is narrowed down to a small family described by a few parameters, following which linearization or grid search is used to identify the optimal policy within the set. This scheme runs the risk of leaving out more complex (and perhaps counter-intuitive) policies for disease control that could tackle the disease more efficiently. In this article, we use techniques from convex optimization theory and machine learning to conduct optimizations over disease policies described by hundreds of parameters. In contrast to past approaches for policy optimization based on control theory, our framework can deal with arbitrary uncertainties on the initial conditions and on the model parameters controlling the spread of the disease, as well as with stochastic models. In addition, our methods allow for optimization over policies which remain constant over weekly periods, specified by either continuous or discrete (e.g. lockdown on/off) government measures. We illustrate our approach by minimizing the total time required to eradicate COVID-19 within the Susceptible-Exposed-Infected-Recovered (SEIR) model proposed by Kissler et al. (March, 2020).
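The policy space being optimised over (weekly-constant, possibly on/off measures acting on an SEIR model) can be sketched as follows. This is a generic discrete-time SEIR with textbook parameter values and an arbitrary alternating-lockdown schedule, not the calibrated model of Kissler et al. or the paper's optimizer.

```python
# Discrete-time SEIR run under a weekly-constant on/off lockdown schedule
# (generic textbook parameters; the schedule is an arbitrary example policy).

def seir(beta_schedule, sigma=1/5.0, gamma=1/10.0, days=140, i0=1e-4):
    """Simulate normalized SEIR; return (peak infectious fraction, attack rate)."""
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    peak = i
    for day in range(days):
        beta = beta_schedule[day // 7]        # policy is constant over each week
        new_exposed = beta * s * i
        new_infectious = sigma * e
        new_recovered = gamma * i
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        peak = max(peak, i)
    return peak, r

weeks = 20
no_action = [0.3] * weeks                                        # R0 = 3
lockdown = [0.3 if w % 2 == 0 else 0.08 for w in range(weeks)]   # alternate weeks
peak_free, attack_free = seir(no_action)
peak_lock, attack_lock = seir(lockdown)
```

An optimizer in the spirit of the paper would search over the `beta_schedule` vector itself (hundreds of weekly values, continuous or binary) rather than over a handful of hand-picked parameters.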

