Sequential Parameter Estimation for Mammalian Cell Model Based on In Silico Design of Experiments

Processes, 2018, Vol 6 (8), pp. 100
Author(s): Zhenyu Wang, Hana Sheikh, Kyongbum Lee, Christos Georgakis

Due to the complicated metabolism of mammalian cells, the corresponding dynamic mathematical models usually consist of large sets of differential and algebraic equations with a large number of parameters to be estimated. On the other hand, the measured data available for estimating the model parameters are limited. Consequently, the parameter estimates may converge to a local minimum far from the optimal values, especially when the initial guesses of the parameter values are poor. The methodology presented in this paper provides a systematic way of estimating parameters sequentially that generates better initial guesses for parameter estimation and improves the accuracy of the obtained metabolic model. The model parameters are first classified into four subsets of decreasing importance, based on the sensitivity of the model’s predictions to the parameters’ assumed values. The parameters in the most sensitive subset, typically a small fraction of the total, are estimated first. When the next most sensitive subset is estimated, the more sensitive subsets are estimated again, using their previously obtained optimal values as initial guesses. The power of this sequential estimation approach is illustrated through a case study on the estimation of parameters in a dynamic model of CHO cell metabolism in fed-batch culture. We show that the sequential parameter estimation approach improves model accuracy and that using limited data to estimate low-sensitivity parameters can worsen model performance.
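The sequential scheme described in the abstract can be sketched in a few lines. The toy exponential-plus-linear model, the subset sizes and the local sensitivity measure below are illustrative stand-ins, not the paper’s CHO metabolic model:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model standing in for the metabolic model: y = a*exp(-b*t) + c*t + d.
def model(theta, t):
    a, b, c, d = theta
    return a * np.exp(-b * t) + c * t + d

t = np.linspace(0.0, 5.0, 50)
true_theta = np.array([2.0, 1.2, 0.5, 0.1])
rng = np.random.default_rng(0)
data = model(true_theta, t) + rng.normal(0.0, 0.01, t.size)

def sensitivity_rank(theta, t, eps=1e-4):
    """Rank parameters by how strongly a small perturbation moves the predictions."""
    base = model(theta, t)
    scores = []
    for i in range(theta.size):
        pert = theta.copy()
        pert[i] += eps * max(abs(theta[i]), 1.0)
        scores.append(np.linalg.norm(model(pert, t) - base))
    return np.argsort(scores)[::-1]  # most sensitive first

def sequential_estimate(theta0, t, data, subset_sizes=(1, 2, 3, 4)):
    """Grow the active subset from most to least sensitive; at each stage the
    previously estimated parameters are re-fitted from their last optima."""
    order = sensitivity_rank(theta0, t)
    theta = theta0.copy()
    for k in subset_sizes:
        active = order[:k]
        def resid(p):
            full = theta.copy()
            full[active] = p
            return model(full, t) - data
        theta[active] = least_squares(resid, theta[active]).x
    return theta

theta_hat = sequential_estimate(np.array([1.0, 0.5, 0.0, 0.0]), t, data)
```

Each stage warm-starts the next, which is the mechanism by which the sequential approach supplies better initial guesses than a single all-at-once fit from a poor starting point.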

Processes, 2018, Vol 6 (11), pp. 231
Author(s): Ernie Che Mid, Vivek Dua

In this work, a methodology for fault detection in wastewater treatment systems, based on parameter estimation using multiparametric programming, is presented. The main idea is to detect faults by estimating model parameters and monitoring the changes in the residuals of the model parameters. In the proposed methodology, a nonlinear dynamic model of wastewater treatment was discretized into algebraic equations using Euler’s method. A parameter estimation problem was then formulated and transformed into a square system of parametric nonlinear algebraic equations by writing the optimality conditions. The parametric nonlinear algebraic equations were then solved symbolically to obtain the concentration of substrate in the inflow, the inhibition coefficient, and the specific growth rate as explicit functions of the state variables (the concentration of biomass, the concentration of organic matter, the concentration of dissolved oxygen, and the volume). The estimated model parameter values were compared with values from normal operation. If the residual of a model parameter exceeds a certain threshold value, a fault is detected. The application demonstrates the viability of the approach and highlights its ability to detect faults in wastewater treatment systems by providing quick and accurate parameter estimates through the evaluation of explicit parametric functions.
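As a minimal illustration of the residual-monitoring idea, the sketch below uses a made-up algebraic stand-in for the explicit parameter function; the real expression would come from the symbolic solution of the optimality conditions, and the nominal value and threshold are likewise invented:

```python
# Hypothetical explicit parameter function obtained offline (a stand-in for
# the symbolic solution of the estimation problem's optimality conditions).
def estimate_inflow_substrate(x_biomass, s_organic, c_oxygen, volume):
    # Placeholder algebraic expression, chosen only for illustration.
    return s_organic + 0.5 * x_biomass / volume

NOMINAL_S_IN = 2.0   # parameter value under normal operation (illustrative)
THRESHOLD = 0.15     # residual threshold for declaring a fault (illustrative)

def detect_fault(states):
    """Evaluate the explicit parameter function at the measured states and
    flag a fault when the residual against nominal exceeds the threshold."""
    s_in_hat = estimate_inflow_substrate(*states)
    residual = abs(s_in_hat - NOMINAL_S_IN)
    return residual > THRESHOLD, residual

# Normal operation: the estimate matches nominal, so no fault is raised.
ok, r_normal = detect_fault((1.0, 1.5, 0.8, 1.0))
# Faulty operation: elevated organic matter shifts the parameter estimate.
bad, r_fault = detect_fault((1.0, 2.0, 0.8, 1.0))
```

Because the parameter estimate is an explicit function of the states, the online cost is a single function evaluation per sample, which is what makes the method quick.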


2005, Vol 6 (2), pp. 156-172
Author(s): Yuqiong Liu, Hoshin V. Gupta, Soroosh Sorooshian, Luis A. Bastidas, William J. Shuttleworth

Abstract In coupled land surface–atmosphere modeling, the possibility and benefits of constraining model parameters using observational data bear investigation. Using the locally coupled NCAR Single-column Community Climate Model (NCAR SCCM), this study demonstrates some feasible, effective approaches to constrain parameter estimates for coupled land–atmosphere models and explores the effects of including both land surface and atmospheric parameters and fluxes/variables in the parameter estimation process, as well as the value of conducting the process in a stepwise manner. The results indicate that the use of both land surface and atmospheric flux variables to construct error criteria can lead to better-constrained parameter sets. The model with “optimal” parameters generally performs better than when a priori parameters are used, especially when some atmospheric parameters are included in the parameter estimation process. The overall conclusion is that, to achieve balanced, reasonable model performance on all variables, it is desirable to optimize both land surface and atmospheric parameters and use both land surface and atmospheric fluxes/variables for error criteria in the optimization process. The results also show that, for a coupled land–atmosphere model, there are potential advantages to using a stepwise procedure in which the land surface parameters are first identified in offline mode, after which the atmospheric parameters are determined in coupled mode. This stepwise scheme appears to provide comparable solutions to a fully coupled approach, but with considerably reduced computational time. The trade-off in the ability of a model to satisfactorily simulate different processes simultaneously, as observed in most multicriteria studies, is most evident for sensible heat and precipitation in this study for the NCAR SCCM.


2017, Vol 48 (1), pp. 339-374
Author(s): Greg Taylor

Abstract The hierarchical credibility model was introduced, and extended, in the 1970s and early 1980s. It deals with the estimation of parameters that characterize the nodes of a tree structure. That model is limited, however, by the fact that its parameters are assumed fixed over time; its parameter estimates therefore track the parameters poorly when the latter vary over time. This paper seeks to remove this limitation by assuming the parameters in question to follow a process akin to a random walk over time, producing an evolutionary hierarchical model. The specific form of the model is compatible with the use of the Kalman filter for parameter estimation and forecasting. The application of the Kalman filter is conceptually straightforward, but the tree structure of the model parameters can be extensive, and some effort is required to keep the updating algorithm organized. This is achieved by suitable manipulation of the graph associated with the tree; the graph matrix then appears in the matrix calculations inherent in the Kalman filter. A numerical example is included to illustrate the application of the filter to the model.
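The random-walk-plus-Kalman-filter idea can be illustrated for a single scalar parameter; the hierarchical tree and graph-matrix machinery of the paper are omitted, and the noise variances below are illustrative:

```python
import numpy as np

def kalman_random_walk(y, q=0.01, r=0.25, theta0=0.0, p0=1.0):
    """Track a parameter that evolves as a random walk,
    theta_t = theta_{t-1} + w_t (Var w = q), observed through
    y_t = theta_t + v_t (Var v = r)."""
    theta, p = theta0, p0
    estimates = []
    for obs in y:
        p = p + q                      # predict: random walk inflates variance
        k = p / (p + r)                # Kalman gain
        theta = theta + k * (obs - theta)  # update toward the new observation
        p = (1.0 - k) * p
        estimates.append(theta)
    return np.array(estimates)

rng = np.random.default_rng(1)
true_theta = np.cumsum(rng.normal(0.0, 0.1, 200))  # drifting parameter
y = true_theta + rng.normal(0.0, 0.5, 200)         # noisy observations
est = kalman_random_walk(y)
```

The filter discounts older data automatically through the gain, which is exactly what lets the evolutionary model track time-varying parameters that a fixed-parameter credibility model would smear.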


Author(s): Serge Hoogendoorn, Raymond Hoogendoorn

Parameter identification of microscopic driving models is a difficult task. This is due partly to the fact that parameters—such as reaction time and sensitivity to stimuli—are generally not directly observable from common traffic data, and partly to the lack of reliable statistical estimation techniques. This contribution puts forward a new approach to identifying the parameters of car-following models. One of the main contributions of this article is that the proposed approach allows for the joint estimation of parameters using different data sources, including prior information on parameter values (or on their valid ranges). This is achieved by generalizing the maximum-likelihood estimation approach proposed by the authors in previous work. The approach allows for statistical analysis of the parameter estimates, including their standard errors and correlations. Using the likelihood-ratio test, models of different complexity (defined by the number of model parameters) can be cross-compared. A useful property of this test is that it takes into account both the number of parameters of a model and its performance. To illustrate the workings of the approach, it is applied to two car-following models using vehicle trajectories on a Dutch freeway collected from a helicopter, in combination with data collected with a driving simulator.
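The likelihood-ratio comparison of nested models can be sketched as follows; the log-likelihood values, parameter counts and significance level are illustrative, not taken from the paper:

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_simple, loglik_complex, extra_params, alpha=0.05):
    """Reject the simpler nested model if the gain in log-likelihood exceeds
    what chance would allow given the number of extra parameters."""
    stat = 2.0 * (loglik_complex - loglik_simple)       # LR statistic
    critical = chi2.ppf(1.0 - alpha, df=extra_params)   # chi-square cutoff
    p_value = chi2.sf(stat, df=extra_params)
    return stat > critical, p_value

# Example: a 5-parameter car-following model improves the log-likelihood by 12
# over a 3-parameter one. Is the improvement worth the 2 extra parameters?
reject_simple, p = likelihood_ratio_test(-1520.0, -1508.0, extra_params=2)
```

The degrees of freedom equal the number of extra parameters, so the test penalizes model complexity as well as rewarding fit, which is the property highlighted in the abstract.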


Author(s): James R. McCusker, Kourosh Danai

A method of parameter estimation was recently introduced that estimates each parameter of a dynamic model separately [1]. In this method, regions coined parameter signatures are identified in the time-scale domain wherein the prediction error can be attributed to the error of a single model parameter. Based on these single-parameter associations, individual model parameters can then be estimated iteratively. Relative to nonlinear least squares, the proposed Parameter Signature Isolation Method (PARSIM) has two distinct attributes. One attribute of PARSIM is that it leaves the estimation of a parameter dormant when a parameter signature cannot be extracted for it. The other is its independence from the contour of the prediction error. The first attribute can cause erroneous parameter estimates when the parameters are not adapted continually. The second attribute, on the other hand, can provide a safeguard against entrapment in local minima. These attributes motivate integrating PARSIM with a method, such as nonlinear least squares, that is less prone to dormancy of parameter estimates. The paper demonstrates the merit of the proposed integrated approach in application to a difficult estimation problem.


2020, Vol 126 (4), pp. 559-570
Author(s): Ming Wang, Neil White, Jim Hanan, Di He, Enli Wang, ...

Abstract Background and Aims Functional–structural plant (FSP) models provide insights into the complex interactions between plant architecture and underlying developmental mechanisms. However, parameter estimation of FSP models remains challenging. We therefore used pattern-oriented modelling (POM) to test whether the parameterization of FSP models can be made more efficient, systematic and powerful. With POM, a set of weak patterns is used to determine uncertain parameter values, instead of measuring them in experiments or observations, which often is infeasible. Methods We used an existing FSP model of avocado (Persea americana ‘Hass’) and tested whether POM parameterization would converge to an existing manual parameterization. The model was run for 10 000 parameter sets and the model outputs were compared with verification patterns. Each verification pattern served as a filter for rejecting unrealistic parameter sets. The model was then validated by running it with the surviving parameter sets that passed all filters and comparing their pooled model outputs with additional validation patterns that were not used for parameterization. Key Results POM calibration led to 22 surviving parameter sets. Within these sets, most individual parameters varied over a large range. One of the resulting sets was similar to the manually parameterized set. Using the entire suite of surviving parameter sets, the model successfully predicted all validation patterns; however, two of the surviving parameter sets on their own could not make the model predict all validation patterns. Conclusions Our findings suggest strong interactions among the model parameters and their corresponding processes. Using all surviving parameter sets takes these interactions fully into account, thereby improving model performance with respect to validation and model output uncertainty. We conclude that POM calibration allows FSP models to be developed in a timely manner without having to rely on field or laboratory experiments or on cumbersome manual parameterization. POM also increases the predictive power of FSP models.
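The filtering step of POM can be sketched with a toy two-parameter model; the model, the patterns and their acceptance ranges below are invented for illustration and bear no relation to the avocado model:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_model(params):
    """Stand-in for an FSP model run: maps a parameter set to named outputs."""
    growth, branching = params
    return {
        "shoots": growth * (1.0 + branching),  # illustrative output
        "height": 10.0 * growth,               # illustrative output
    }

# Each verification pattern is a weak, range-based filter on one output.
patterns = {
    "shoots": lambda v: 1.5 <= v <= 2.5,
    "height": lambda v: 8.0 <= v <= 12.0,
}

# Draw candidate parameter sets, run the model, and keep only the sets
# whose outputs reproduce every verification pattern.
candidates = rng.uniform(0.0, 2.0, size=(10_000, 2))
surviving = [
    p for p in candidates
    if all(check(toy_model(p)[name]) for name, check in patterns.items())
]
```

Running the model with the whole surviving ensemble, rather than a single best set, is what lets POM carry parameter interactions through to validation and to the model-output uncertainty.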


2009, Vol 63 (3)
Author(s): Michal Čižniar, Marián Podmajerský, Tomáš Hirmajer, Miroslav Fikar, Abderrazak Latifi

Abstract The estimation of parameters in semi-empirical models is essential in numerous areas of engineering and applied science. In many cases, these models are described by a set of ordinary differential equations or by a set of differential-algebraic equations. Due to the presence of non-convexities in the functions participating in these equations, current gradient-based optimization methods can guarantee only locally optimal solutions. This deficiency can have a marked impact on the operation of chemical processes from the economic, environmental and safety points of view, and it thus motivates the development of global optimization algorithms. This paper presents a global optimization method which guarantees ε-convergence to the global solution. The approach consists in the transformation of the dynamic optimization problem into a nonlinear programming problem (NLP) using the method of orthogonal collocation on finite elements. Rigorous convex underestimators of the nonconvex NLP problem are employed within the spatial branch-and-bound method and solved to global optimality. The proposed method was applied to two example problems dealing with parameter estimation from time-series data.


2017, Vol 49 (4), pp. 1042-1055
Author(s): Shushobhit Chaudhary, C. T. Dhanya, Arun Kumar

Abstract Calibration is the most critical phase in any water quality modelling process. This study proposes a sequential calibration methodology for any water quality model using reach-specific estimates of model parameters, which aids the improved prediction of river water quality characteristics. The proposed methodology accounts for the heterogeneity of river reaches, i.e., the diverse characteristics of different reaches along the river stretch. The water quality model QUAL2K is coupled with MATLAB, a computing platform, to facilitate sequential estimation of reach-wise model parameters using a grid-based weighted-average optimization. The Delhi segment of the Yamuna River is selected as the study river stretch. Observations of the water quality variables dissolved oxygen and biochemical oxygen demand are used to calibrate and validate QUAL2K. Desirable performance measures are obtained during the calibration and validation periods. The methodology proves superior to the existing calibration methodologies applied over the study region. The proposed technique also captures the system behaviour effectively, in a systematic, efficient and user-friendly way. The proposed approach is expected to aid decision-makers in formulating better reach-wise management decisions and treatment policies by providing a simpler and more efficient way to simulate water quality parameters.


2011, Vol 15 (11), pp. 3591-3603
Author(s): R. Singh, T. Wagener, K. van Werkhoven, M. E. Mann, R. Crane

Abstract. Projecting how future climatic change might impact streamflow is an important challenge for hydrologic science. The common approach to solve this problem is by forcing a hydrologic model, calibrated on historical data or using a priori parameter estimates, with future scenarios of precipitation and temperature. However, several recent studies suggest that the climatic regime of the calibration period is reflected in the resulting parameter estimates and model performance can be negatively impacted if the climate for which projections are made is significantly different from that during calibration. So how can we calibrate a hydrologic model for historically unobserved climatic conditions? To address this issue, we propose a new trading-space-for-time framework that utilizes the similarity between the predictions under change (PUC) and predictions in ungauged basins (PUB) problems. In this new framework we first regionalize climate dependent streamflow characteristics using 394 US watersheds. We then assume that this spatial relationship between climate and streamflow characteristics is similar to the one we would observe between climate and streamflow over long time periods at a single location. This assumption is what we refer to as trading-space-for-time. Therefore, we change the limits for extrapolation to future climatic situations from the restricted locally observed historical variability to the variability observed across all watersheds used to derive the regression relationships. A typical watershed model is subsequently calibrated (conditioned) on the predicted signatures for any future climate scenario to account for the impact of climate on model parameters within a Bayesian framework. As a result, we can obtain ensemble predictions of continuous streamflow at both gauged and ungauged locations. 
The new method is tested in five US watersheds located in historically different climates using synthetic climate scenarios generated by increasing mean temperature by up to 8 °C and changing mean precipitation by −30% to +40% from their historical values. Depending on the aridity of the watershed, streamflow projections using adjusted parameters became significantly different from those using historically calibrated parameters if precipitation change exceeded −10% or +20%. In general, the trading-space-for-time approach resulted in a stronger watershed response to climate change for both high and low flow conditions.


1993, Vol 27 (9), pp. 1034-1039
Author(s): Ene I. Ette, Andrew W. Kelman, Catherine A. Howie, Brian Whiting

OBJECTIVE: To develop new approaches for evaluating results obtained from simulation studies used to determine sampling strategies for the efficient estimation of population pharmacokinetic parameters. METHODS: One-compartment kinetics with intravenous bolus injection was assumed, and the simulated data (one observation made on each experimental unit [human subject or animal]) were analyzed using NONMEM. Several approaches were used to judge the efficiency of parameter estimation. These included: (1) individual and joint confidence interval (CI) coverage for parameter estimates, computed in a manner that would reveal the influence of bias and standard error (SE) on interval estimates; (2) a percent prediction error (%PE) approach; (3) the incidence of high pair-wise correlations; and (4) a design number approach. The design number (Φ) is a new statistic that provides a composite measure of accuracy and precision (using SE). RESULTS: The %PE approach is useful only in examining the efficiency of estimation of a parameter considered independently. The joint CI coverage approach permitted assessment of the accuracy and reliability of all model parameter estimates. The Φ approach is an efficient method of achieving an accurate estimate of parameters with good precision. Both the Φ for individual parameter estimation and the overall Φ for the estimation of model parameters led to optimal experimental design. CONCLUSIONS: Application of these approaches to the analysis of the results of the study was found useful in determining the best sampling design (from a series of designs with two sampling times within a study) for the efficient estimation of population pharmacokinetic parameters.

