Using experimental data and information criteria to guide model selection for reaction–diffusion problems in mathematical biology

2018
Author(s):  
David J. Warne ◽  
Ruth E. Baker ◽  
Matthew J. Simpson

Abstract: Reaction–diffusion models describing the movement, reproduction and death of individuals within a population are key mathematical modelling tools with widespread applications in mathematical biology. A diverse range of such continuum models have been applied in various biological contexts by choosing different flux and source terms in the reaction–diffusion framework. For example, to describe collective spreading of cell populations, the flux term may be chosen to reflect various movement mechanisms, such as random motion (diffusion), adhesion, haptotaxis, chemokinesis and chemotaxis. The choice of flux terms in specific applications, such as wound healing, is usually made heuristically, and rarely is it tested quantitatively against detailed cell density data. More generally, in mathematical biology, the questions of model validation and model selection have not received the same attention as the questions of model development and model analysis. Many studies do not consider model validation or model selection, and those that do often base the selection of the model on residual error criteria after model calibration is performed using nonlinear regression techniques. In this work, we present a model selection case study, in the context of cell invasion, with a very detailed experimental data set. Using Bayesian analysis and information criteria, we demonstrate that model selection and model validation should account for both residual errors and model complexity. These considerations are often overlooked in the mathematical biology literature. The results we present here provide a clear methodology that can be used to guide model selection across a range of applications. Furthermore, the case study we present provides a clear example where neglecting the role of model complexity can give rise to misleading outcomes.
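The trade-off the abstract describes, penalizing residual error and model complexity together rather than residual error alone, can be sketched with standard information criteria. The sketch below is illustrative only (it is not the authors' code), and assumes i.i.d. Gaussian errors so that the criteria reduce to functions of the residual sum of squares (RSS), sample size n, and parameter count k; the RSS values are hypothetical.

```python
import numpy as np

def aic(rss, n, k):
    # Akaike information criterion under i.i.d. Gaussian errors
    return n * np.log(rss / n) + 2 * k

def bic(rss, n, k):
    # Bayesian information criterion; penalizes parameters more heavily for n > 7
    return n * np.log(rss / n) + k * np.log(n)

# hypothetical calibration results: a 2-parameter flux model versus a
# 6-parameter one that fits only marginally better (RSS 50 vs. 49, n = 100)
simple_aic, complex_aic = aic(50.0, 100, 2), aic(49.0, 100, 6)
simple_bic, complex_bic = bic(50.0, 100, 2), bic(49.0, 100, 6)
```

Here the marginal reduction in residual error does not justify the four extra parameters, so both criteria prefer the simpler model, exactly the kind of outcome that residual-error-only selection would miss.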

Author(s):  
D. Bulatov ◽  
S. Wenzel ◽  
G. Häufel ◽  
J. Meidow

Streets are essential entities of urban terrain, and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process is a set of so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker; finally, model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both variance–covariance analysis of residuals and model complexity. The results on a complex data set with many traffic roundabouts indicate the benefits of the proposed procedure.
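The model-selection step, choosing between geometric primitives by weighing residual fit against model complexity, can be illustrated with a toy sketch. This is not the paper's implementation: the algebraic (Kåsa) circle fit and the BIC-style score below are illustrative assumptions, standing in for the paper's variance–covariance analysis.

```python
import numpy as np

def line_residuals(pts):
    # total-least-squares line via SVD: signed orthogonal distances
    d = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(d, full_matrices=False)
    return d @ vt[1]                     # projection onto the minor axis

def circle_residuals(pts):
    # algebraic (Kasa) circle fit: x^2 + y^2 + D x + E y + F = 0
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx**2 + cy**2 - F)
    return np.hypot(x - cx, y - cy) - r  # radial residuals

def score(res, k):
    # BIC-style score: residual fit term plus complexity penalty for k parameters
    n = len(res)
    return n * np.log(np.mean(res**2) + 1e-12) + k * np.log(n)

# points on a circular arc: the circle (k = 3) should beat the line (k = 2)
t = np.linspace(0, np.pi, 50)
arc = np.column_stack([np.cos(t), np.sin(t)])
best = 'circle' if score(circle_residuals(arc), 3) < score(line_residuals(arc), 2) else 'line'
```

On near-straight segments the penalty flips the decision toward the line, which is the point of including model complexity in the score.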


2020
Vol 181
pp. 107134
Author(s):  
Giovanni Ciampi ◽  
Michelangelo Scorpio ◽  
Yorgos Spanodimitriou ◽  
Antonio Rosato ◽  
Sergio Sibilio

Author(s):  
Władysław Homenda ◽  
Agnieszka Jastrzȩbska ◽  
Witold Pedrycz ◽  
Fusheng Yu

Abstract: In this paper, we look closely at the issue of contaminated data sets, where apart from legitimate (proper) patterns we encounter erroneous patterns. In a typical scenario, the classification of a contaminated data set is always negatively influenced by garbage patterns (referred to as foreign patterns). Ideally, we would like to remove them from the data set entirely. The paper is devoted to the comparison and analysis of three different models capable of performing classification of proper patterns with rejection of foreign patterns. It should be stressed that the studied models are constructed using proper patterns only, and no knowledge about the characteristics of foreign patterns is needed. The methods are illustrated with a case study of handwritten digit recognition, but the proposed approach itself is formulated in a general manner. Therefore, it can be applied to different problems. We have distinguished three structures: global, local, and embedded, all capable of eliminating foreign patterns while performing classification of proper patterns at the same time. A comparison of the proposed models shows that the embedded structure provides the best results, but at the cost of a relatively high model complexity. The local architecture provides satisfying results and at the same time is relatively simple.
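The key constraint, building a rejecting classifier from proper patterns only, can be sketched with a minimal example. The nearest-centroid model and per-class rejection radii below are illustrative assumptions, not any of the paper's three architectures; they merely show how a rejection rule can be learned without ever seeing a foreign pattern.

```python
import numpy as np

class RejectingNearestCentroid:
    """Nearest-centroid classifier that rejects patterns far from every class
    centroid. Rejection radii are learned from proper patterns only, mirroring
    the setting where foreign patterns are unavailable at training time."""

    def fit(self, X, y, quantile=0.95):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # per-class radius: 95th percentile of training distances to own centroid
        self.radii_ = np.array([
            np.quantile(np.linalg.norm(X[y == c] - self.centroids_[i], axis=1), quantile)
            for i, c in enumerate(self.classes_)])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        labels = self.classes_[nearest]
        reject = d[np.arange(len(X)), nearest] > self.radii_[nearest]
        return np.where(reject, -1, labels)  # -1 marks a rejected (foreign) pattern

# two proper classes; a far-away point plays the role of a foreign pattern
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
out = RejectingNearestCentroid().fit(X, y).predict(
    np.array([[0.0, 0.0], [10.0, 10.0], [100.0, 100.0]]))
```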


2015
Vol 27 (9)
pp. 1857-1871
Author(s):  
Chee-Ming Ting ◽  
Abd-Krim Seghouane ◽  
Muhammad Usman Khalid ◽  
Sh-Hussain Salleh

We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used a model order of one, ignoring that the optimal order may vary considerably across data sets depending on data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for high-dimensional fMRI data, which typically have small sample sizes. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types—a resting-state, an event-related design, and a block design data set—with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC), based on Kullback's symmetric divergence, which combines two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly for small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one, but not necessarily one, were chosen for the large dimensions of full-brain networks.
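The small-sample correction can be illustrated in the scalar (AR rather than VAR) case. The sketch below is an illustrative assumption, not the authors' code: it fits AR(p) by least squares, scores each order with AIC and its standard small-sample correction AICc, and shows that the corrected criterion's heavier penalty can only select an order less than or equal to AIC's choice, counteracting the overfitting the abstract describes. (KIC follows the same pattern with a 3k penalty in place of AIC's 2k.)

```python
import numpy as np

def ar_order_selection(x, pmax=6):
    """Least-squares AR(p) fits for p = 1..pmax, scored with AIC and AICc.
    All fits condition on the first pmax values so that every order is
    evaluated on the same effective sample of size n."""
    n = len(x) - pmax
    y = x[pmax:]
    scores = {}
    for p in range(1, pmax + 1):
        X = np.column_stack([x[pmax - i:len(x) - i] for i in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        k = p + 1                                  # AR coefficients + noise variance
        aic = n * np.log(rss / n) + 2 * k
        aicc = aic + 2 * k * (k + 1) / (n - k - 1)  # small-sample correction
        scores[p] = (aic, aicc)
    order_aic = min(scores, key=lambda p: scores[p][0])
    order_aicc = min(scores, key=lambda p: scores[p][1])
    return order_aic, order_aicc

# simulated AR(2) series: x_t = 0.5 x_{t-1} + 0.3 x_{t-2} + e_t
rng = np.random.default_rng(1)
x = np.zeros(80)
for t in range(2, 80):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + rng.standard_normal()
order_aic, order_aicc = ar_order_selection(x)
```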


Author(s):  
Olivier Francois ◽  
Guillaume Laval

Approximate Bayesian computation (ABC) is a class of algorithmic methods in Bayesian inference using statistical summaries and computer simulations. ABC has become popular in evolutionary genetics and in other branches of biology. However, model selection under ABC algorithms has been a subject of intense debate during the recent years. Here, we propose novel approaches to model selection based on posterior predictive distributions and approximations of the deviance. We argue that this framework can settle some contradictions between the computation of model probabilities and posterior predictive checks using ABC posterior distributions. A simulation study and an analysis of a resequencing data set of human DNA show that the deviance criteria lead to sensible results in a number of model choice problems of interest to population geneticists.
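The standard ABC model-choice algorithm the debate centers on can be sketched in a few lines: draw a model index from its prior, simulate data from that model, accept when the simulated summary statistic is close to the observed one, and estimate posterior model probabilities as acceptance frequencies. The sketch below is a deliberately minimal illustration (not the authors' deviance-based method): both candidate models are parameter-free, and the summary statistic, tolerance, and distributions are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(0.0, 1.0, size=100)   # "observed" data, generated from model 0
s_obs = obs.std()                      # summary statistic: sample standard deviation

# two candidate simulators (parameter-free, to keep the sketch minimal)
models = [lambda: rng.normal(0.0, 1.0, 100),   # model 0: N(0, 1)
          lambda: rng.normal(0.0, 3.0, 100)]   # model 1: N(0, 9)

accepted = []
for _ in range(2000):
    m = rng.integers(2)                # uniform prior over models
    s_sim = models[m]().std()
    if abs(s_sim - s_obs) < 0.2:       # rejection step with tolerance 0.2
        accepted.append(m)

# posterior model probability approximated by the acceptance frequency
p_model0 = accepted.count(0) / len(accepted)
```

With summaries this informative the acceptance frequency strongly favors the data-generating model; the difficulties the abstract addresses arise when the summaries are not sufficient for discriminating between models.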


2012
Vol 18
pp. 150-155
Author(s):  
NILZA PIRES

There are many cosmological scenarios that try to explain the observed current accelerated expansion. They are based either on the existence of new fields in nature or on the modification of the gravitation theory. This work investigates the observational viability of a modified gravity f(R) = R − α/R^n within the Palatini approach, in the light of 32 age measurements of passively evolving galaxies and the baryonic acoustic oscillation peak scale. Using information-criteria model selection, this scenario is compared with the DGP alternative cosmological model, as well as with adjustable dark-energy components based on general relativity, against the observational data set.


1981
Vol 20 (04)
pp. 207-212
Author(s):  
J. Hermans ◽  
B. van Zomeren ◽  
J. W. Raatgever ◽  
P. J. Sterk ◽  
J. D. F. Habbema

By means of a case study, the choice between several methods of discriminant analysis is presented. Experimental data from a two-group problem with one or two variables are analysed. The different methods are compared according to the posterior probabilities, which can be computed for each subject and which form the basis of discriminant analysis. These posterior probabilities are analysed both graphically and numerically.
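The posterior probabilities on which the comparison rests follow from Bayes' rule applied to the class-conditional densities. As a minimal sketch (not the paper's analysis), assuming univariate normal densities within each group and equal priors:

```python
import numpy as np
from math import pi

def posterior_two_groups(x, means, sds, priors=(0.5, 0.5)):
    """Posterior group probabilities for one observation x under univariate
    normal class densities -- the quantities on which discriminant analysis
    bases its allocation of subjects to groups."""
    dens = np.array([
        np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * pi))
        for m, s in zip(means, sds)])
    w = dens * np.array(priors)
    return w / w.sum()                 # Bayes' rule

# observation at the first group's mean, groups one standard deviation apart twice over
p = posterior_two_groups(0.0, means=(0.0, 2.0), sds=(1.0, 1.0))
```

With equal variances and priors this reduces to a logistic function of the distance between group means, which is why the posteriors lend themselves to the graphical comparison the abstract describes.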

