identifiability problems
Recently Published Documents

TOTAL DOCUMENTS: 21 (five years: 1)
H-INDEX: 7 (five years: 0)

2021, Vol 17 (10), pp. e1009425. Author(s): Mario Castro, Rob J. de Boer

In their Commentary paper ('On testing structural identifiability by a simple scaling method: relying on scaling symmetries can be misleading'), Villaverde and Massonis comment on our paper, in which we proposed a simple scaling method to test structural identifiability. Our scaling invariance method (SIM) tests for scaling symmetries only, and Villaverde and Massonis correctly show that the SIM may fail to detect identifiability problems when a model has other types of symmetries. We agree with the limitations raised by these authors, but we also emphasize that the method remains valuable for its applicability to a wide variety of models, its simplicity, and even as a tool to introduce the problem of identifiability to investigators with little training in mathematics.
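The scaling-symmetry idea behind the SIM can be sketched symbolically. The model below is a hypothetical one-compartment example (not taken from the paper): dx/dt = a − b·x with observable y = c·x. If a scaling of the state and parameters leaves both the ODE and the output unchanged, the scaled parameters cannot be identified individually.

```python
import sympy as sp

# Hypothetical example: dx/dt = a - b*x, observed y = c*x.
# Candidate scaling symmetry: x -> L*x, a -> L*a, c -> c/L (b unchanged).
# If the transformed system reproduces the same ODE and the same output,
# a and c cannot be identified separately -- only the product a*c can.
t, L = sp.symbols("t L", positive=True)
a, b, c = sp.symbols("a b c", positive=True)
x = sp.Function("x")(t)

rhs = a - b * x          # right-hand side of dx/dt
output = c * x           # measured signal

# Apply the candidate scaling to state and parameters.
rhs_scaled = rhs.subs({x: L * x, a: L * a})         # rhs for x' = L*x
output_scaled = output.subs({x: L * x, c: c / L})   # transformed observable

# Since dx'/dt = L * dx/dt, the ODE is invariant iff rhs_scaled == L * rhs.
ode_invariant = sp.simplify(rhs_scaled - L * rhs) == 0
obs_invariant = sp.simplify(output_scaled - output) == 0
print(ode_invariant, obs_invariant)  # both True: a, c not separately identifiable
```

As the Commentary points out, checking only scalings of this form can miss non-scaling symmetries, which is exactly the limitation discussed above.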


2018, Vol 15 (142), pp. 20170871. Author(s): Sanjay Pant

A new class of functions, called the 'information sensitivity functions' (ISFs), which quantify the information gain about the parameters through the measurements/observables of a dynamical system, is presented. These functions can be easily computed through classical sensitivity functions alone and are based on Bayesian and information-theoretic approaches. While marginal information gain is quantified by the decrease in differential entropy, correlations between arbitrary sets of parameters are assessed through mutual information. For individual parameters, these information gains are also presented as marginal posterior variances and, to assess the effect of correlations, as conditional variances when other parameters are given. The easy-to-interpret ISFs can be used to (a) identify time intervals or regions in dynamical system behaviour where information about the parameters is concentrated; (b) assess the effect of measurement noise on the information gain for the parameters; (c) assess whether sufficient information in an experimental protocol (input, measurements and their frequency) is available to identify the parameters; (d) assess correlation in the posterior distribution of the parameters to identify the sets of parameters that are likely to be indistinguishable; and (e) assess identifiability problems for particular sets of parameters.
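The chain from classical sensitivities to an entropy-based information gain can be illustrated with a toy model (this is a sketch of the general idea, not the paper's implementation; the exponential-decay model, noise level, and unit-variance prior are assumptions):

```python
import numpy as np

# Toy model y(t) = A*exp(-k*t) observed with Gaussian noise of std sigma.
# Classical sensitivity functions S = dy/dtheta feed a Fisher information
# matrix; with a Gaussian prior, the posterior covariance is
# (prior^-1 + F)^-1 and the marginal information gain per parameter is the
# decrease in differential entropy, 0.5*log(prior_var / posterior_var).
A_true, k_true, sigma = 2.0, 0.5, 0.1
t = np.linspace(0.0, 10.0, 50)

# Classical sensitivities of y = A*exp(-k*t) w.r.t. (A, k).
S = np.column_stack([
    np.exp(-k_true * t),                   # dy/dA
    -A_true * t * np.exp(-k_true * t),     # dy/dk
])

fisher = S.T @ S / sigma**2
prior_cov = np.diag([1.0, 1.0])            # assumed unit-variance prior
post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + fisher)

# Marginal information gain (in nats) for A and k.
info_gain = 0.5 * np.log(np.diag(prior_cov) / np.diag(post_cov))
print(info_gain)
```

Evaluating the same quantity over sliding time windows would locate the intervals where information about each parameter is concentrated, in the spirit of use (a) above.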


2015, Vol 6. Author(s): Van Kinh Nguyen, Sebastian C. Binder, Alessandro Boianelli, Michael Meyer-Hermann, Esteban A. Hernandez-Vargas

2009, Vol 25 (5), pp. 055007. Author(s): Carlo Domenico Pagani, Dario Pierotti

2008, Vol 41 (2), pp. 283-288. Author(s): Jiangfeng Zhang, Xiaohua Xia

2007, Vol 55 (10), pp. 1-9. Author(s): D. Orhon, G. Insel, O. Karahan

This paper provides an overview of common problems encountered when using oxygen uptake rate (OUR) measurements for the assessment of wastewater characteristics and process kinetics. Emphasis is placed upon pitfalls that would lead to significant errors. It covers model dependency of the OUR measurements and the need to select appropriate models; interpretation of OUR perturbations as a way to identify new model components and processes; the need for simultaneous observation of relevant model components and multicomponent modelling for appropriate evaluation of OUR measurements; parameter identifiability problems and the effect of active biomass concentration and the endogenous decay rate on model simulation and calibration. Relevant experimental OUR data from previous studies are presented to illustrate and underline common scientific pitfalls.
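The parameter identifiability pitfall mentioned above can be made concrete with a toy simulation (a sketch with assumed parameter values, not the paper's data or model): in a batch OUR experiment with Monod kinetics, when the substrate concentration S stays well below Ks the growth rate reduces to (mu_max/Ks)*S, so only the ratio mu_max/Ks is practically identifiable and two parameter pairs with the same ratio produce near-identical OUR curves.

```python
import numpy as np

def our_curve(mu_max, Ks, S0=5.0, X=500.0, Y=0.6, dt=0.001, t_end=0.5):
    """Euler-integrated OUR for a simple Monod batch model (biomass X held
    constant for simplicity; units are nominal)."""
    S, ours = S0, []
    for _ in range(int(t_end / dt)):
        mu = mu_max * S / (Ks + S)             # Monod growth rate
        ours.append((1.0 - Y) / Y * mu * X)    # oxygen uptake from growth
        S = max(S - mu * X / Y * dt, 0.0)      # substrate consumption
    return np.array(ours)

# Two parameter sets with the same mu_max/Ks ratio (assumed values).
our1 = our_curve(mu_max=2.0, Ks=1000.0)
our2 = our_curve(mu_max=4.0, Ks=2000.0)
rel_diff = np.max(np.abs(our1 - our2)) / np.max(our1)
print(rel_diff)   # well below 1%: the two parameter sets are indistinguishable
```

A fitting routine applied to such data would recover the ratio but not mu_max and Ks individually, which is one reason model selection and experiment design matter when evaluating OUR measurements.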


2005, Vol 17 (2), pp. 453-485. Author(s): A. Menchero, R. Montes Diez, D. Ríos Insua, P. Müller

We show how Bayesian neural networks can be used for time-series analysis. We consider a block-based model building strategy to model linear and nonlinear features within the time series: a linear combination of a linear autoregression term and a feedforward neural network (FFNN) with an unknown number of hidden nodes. To allow for simpler models, we also consider these terms separately as competing models to select from. Model identifiability problems arise when FFNN sigmoidal activation functions exhibit almost linear behaviour or when there are almost duplicate or irrelevant neural network nodes. New reversible-jump moves are proposed to facilitate model selection, mitigating model identifiability problems. We illustrate this methodology by analyzing several time-series data examples.
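The near-linear-sigmoid problem described above has a simple numerical signature (a sketch with assumed weights, not the paper's examples): a tanh hidden node with a small input weight behaves almost linearly, tanh(w·x) ≈ w·x, so it duplicates the linear autoregressive regressor and the design matrix built from both becomes nearly collinear.

```python
import numpy as np

# As the input weight w shrinks, the tanh node collapses onto the linear
# term and the condition number of the design matrix [x, tanh(w*x)] blows
# up -- the near-unidentifiable regime that stalls posterior exploration.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)

conds = {}
for w in (2.0, 0.5, 0.01):
    X = np.column_stack([x, np.tanh(w * x)])
    conds[w] = np.linalg.cond(X)
    print(f"w = {w:5.2f}  condition number = {conds[w]:.1f}")
```

Reversible-jump moves that propose deleting such a node (folding its effect into the linear term) are one way to escape this regime during model selection.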


2004, Vol 36 (3), pp. 774-790. Author(s): Tim Bedford, Bo H. Lindqvist

Within reliability theory, identifiability problems arise through competing risks. If we have a series system of several components, and if that system is replaced or repaired to as good as new on failure, then the different component failures represent competing risks for the system. It is well known that the underlying component failure distributions cannot be estimated from the observable data (failure time and identity of failed component) without nontestable assumptions such as independence. In practice many systems are not subject to the ‘as good as new’ repair regime. Hence, the objective of this paper is to contrast the identifiability issues arising for different repair regimes. We consider the problem of identifying a model within a given class of probabilistic models for the system. Different models corresponding to different repair strategies are considered: a partial-repair model, where only the failing component is repaired; perfect repair, where all components are as good as new after a failure; and minimal repair, where components are only minimally repaired at failures. We show that on the basis of observing a single socket, the partial-repair model is identifiable, while the perfect- and minimal-repair models are not.
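The non-identifiability in the perfect-repair ("as good as new") case can be demonstrated by simulation (a sketch with assumed exponential rates, not the paper's construction): two different latent mechanisms yield identical observable data, so the component lifetime distributions cannot be recovered without assumptions such as independence.

```python
import numpy as np

# Mechanism A: independent exponential component lifetimes with rates
# l1, l2; we observe only the failure time (the minimum) and which
# component failed. Mechanism B: draw the observables directly -- the
# minimum of independent exponentials is Exp(l1+l2), and the failing
# component is Bernoulli with probability proportional to its rate --
# leaving the latent joint distribution entirely unconstrained.
rng = np.random.default_rng(1)
l1, l2, n = 1.0, 2.0, 200_000

# Mechanism A: latent lifetimes, then observe min and argmin.
x1 = rng.exponential(1 / l1, n)
x2 = rng.exponential(1 / l2, n)
t_a = np.minimum(x1, x2)
c_a = (x2 < x1).astype(int)          # 1 -> component 2 failed first

# Mechanism B: observables drawn directly, no latent lifetimes needed.
t_b = rng.exponential(1 / (l1 + l2), n)
c_b = (rng.random(n) < l2 / (l1 + l2)).astype(int)

print(t_a.mean(), t_b.mean())        # both ~ 1/(l1+l2) = 1/3
print(c_a.mean(), c_b.mean())        # both ~ l2/(l1+l2) = 2/3
```

Under partial repair, by contrast, successive failures at the same socket carry information about the individual component distributions, which is what restores identifiability in that model.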



