Model Selection Uncertainty · Recently Published Documents

Total documents: 28 (five years: 1)
H-index: 10 (five years: 0)

2021 · Author(s): Chenjun Gao, Jingjing He, Xuefei Guan

Abstract: Uncertainty in non-destructive evaluation (NDE) arises from many sources, e.g., manufacturing variability, environmental noise, and inadequate measurement devices. The reliability of NDE measurements is typically quantified by the probability of detection (POD). With advances in simulation methods and computing, considerable effort has been devoted to generating and estimating POD curves for Lamb wave damage detection. However, few studies have addressed POD evaluation that accounts for model selection uncertainty. This paper presents a novel POD assessment method incorporating model selection uncertainty for Lamb wave damage detection. By treating the flaw quantification model as a discrete uncertain variable, a hierarchical probabilistic model for Lamb wave POD is formulated in the Bayesian framework. Uncertainties from the model choice, the model parameters, and other variables can be explicitly incorporated. The Bayes factor is used to evaluate the performance of candidate models. The posterior distributions of the model parameters and the model fusion results are obtained through Bayesian updating with the reversible jump Markov chain Monte Carlo method. A fatigue problem with naturally developed cracks demonstrates the proposed method.
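The Bayes-factor weighting of competing models described above can be illustrated with a short sketch. This is a generic computation, not the paper's code; the log marginal likelihood values are hypothetical placeholders:

```python
import math

def posterior_model_probs(log_marglik):
    """Posterior model probabilities from log marginal likelihoods,
    assuming equal prior probability for each candidate model."""
    m = max(log_marglik)                       # stabilize the exponentials
    w = [math.exp(l - m) for l in log_marglik]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical log marginal likelihoods for three flaw-quantification models
logml = [-120.4, -118.9, -125.0]
probs = posterior_model_probs(logml)

# Bayes factor of model 2 over model 1 (ratio of marginal likelihoods)
bf_21 = math.exp(logml[1] - logml[0])
```

In a full analysis these marginal likelihoods would come from the reversible jump MCMC run, and the probabilities would weight each model's POD curve in the fused result.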


Author(s): Arnaud Dufays, Elysee Aristide Houndetoungan, Alain Coën

Abstract: Change-point (CP) processes are a flexible approach to modeling long time series. We propose a method to uncover which model parameters truly vary when a CP is detected. Given a set of breakpoints, we use a penalized likelihood approach to select the subset of parameters that change over time, and we prove that the penalty function leads to consistent selection of the true model. Estimation is carried out via the deterministic annealing expectation-maximization algorithm. Our method accounts for model selection uncertainty and assigns a probability to every possible time-varying parameter specification. Monte Carlo simulations show that the method works well for many time series models, including heteroskedastic processes. For a sample of fourteen hedge fund (HF) strategies, using an asset-based style pricing model, we shed light on the promising ability of our method to detect the time-varying dynamics of risk exposures and to forecast HF returns.
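A toy version of the idea, selecting which parameters are allowed to change at a known breakpoint via a BIC-style penalty, might look as follows. The Gaussian model, the deterministic series, and the penalty are illustrative assumptions, not the paper's estimator:

```python
import math
from statistics import mean, pstdev

def gauss_ll(xs, mu, sd):
    """Gaussian log-likelihood of a segment under mean mu and std sd."""
    sd = max(sd, 1e-8)
    return sum(-0.5 * math.log(2 * math.pi * sd * sd)
               - (x - mu) ** 2 / (2 * sd * sd) for x in xs)

def bic_for_spec(series, cp, change_mean, change_sd):
    """BIC of a change-point model in which only the flagged parameters
    differ across the breakpoint; the others are fit on the full series."""
    left, right = series[:cp], series[cp:]
    mu_all, sd_all = mean(series), pstdev(series)
    mu_l = mean(left) if change_mean else mu_all
    mu_r = mean(right) if change_mean else mu_all
    sd_l = pstdev(left) if change_sd else sd_all
    sd_r = pstdev(right) if change_sd else sd_all
    k = 2 + change_mean + change_sd            # number of free parameters
    ll = gauss_ll(left, mu_l, sd_l) + gauss_ll(right, mu_r, sd_r)
    return -2 * ll + k * math.log(len(series))

# Deterministic toy series whose level shifts by 5 at cp=20
series = ([0.1 * (i % 3) for i in range(20)]
          + [5 + 0.1 * (i % 3) for i in range(20)])
specs = [(cm, cs) for cm in (False, True) for cs in (False, True)]
best = min(specs, key=lambda s: bic_for_spec(series, 20, *s))
```

The penalized comparison correctly flags the mean as a changing parameter; the paper's method generalizes this search to many parameters and breakpoints and attaches probabilities to each specification.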


2020 · Vol 29 (12) · pp. 3605–3622 · Author(s): Camille Maringe, Aurélien Belot, Bernard Rachet

Despite a large choice of models, functional forms, and types of effects, formal selection of excess hazard models for predicting population cancer survival is not widespread in the literature. We propose multi-model inference based on excess hazard models selected with the Akaike information criterion (AIC) or the Bayesian information criterion (BIC) for prediction and projection of cancer survival. We evaluate the properties of this approach using empirical data from patients diagnosed with breast, colon, or lung cancer in 1990–2011. We artificially censor the data on 31 December 2010 and predict five-year survival for the 2010 and 2011 cohorts. We compare these predictions to the observed five-year cohort estimates of cancer survival and contrast them with predictions from an a priori selected simple model and from the period approach. We illustrate the approach by replicating it for cohorts of patients for whom stage at diagnosis and other important prognostic factors are available. We find that model-averaged predictions and projections of survival differ minimally from the Pohar Perme estimate of survival in many instances, particularly in subgroups of the population. Advantages of information-criterion-based model selection include (i) a transparent model-building strategy, (ii) accounting for model selection uncertainty, (iii) no a priori assumptions about effects, and (iv) projections for patients outside the sample.
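A minimal sketch of the information-criterion model averaging involved, using Akaike weights. The AIC values and five-year net-survival predictions below are invented for illustration, not results from the study:

```python
import math

def akaike_weights(aics):
    """Akaike weights: exp(-0.5 * delta_AIC), normalized to sum to one.
    Smaller AIC means better support, hence larger weight."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]

# Hypothetical AICs and five-year net-survival predictions
# from three candidate excess hazard models
aics = [1520.3, 1518.7, 1525.1]
preds = [0.62, 0.60, 0.65]
weights = akaike_weights(aics)
averaged = sum(w * p for w, p in zip(weights, preds))
```

The model-averaged prediction always lies between the most extreme single-model predictions, and models with nearly equal AIC share the weight rather than one of them being committed to outright.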


2020 · Vol 21 (1) · Author(s): Matthew D. Koslovsky, Marina Vannucci

Abstract: Background: Understanding the relation between the human microbiome and modulating factors, such as diet, may help researchers design intervention strategies that promote and maintain healthy microbial communities. Numerous analytical tools are available to help identify these relations, often via automated variable selection methods. However, available tools frequently ignore evolutionary relations among microbial taxa, potential relations between modulating factors, and model selection uncertainty. Results: We present MicroBVS, an R package for Dirichlet-tree multinomial models with Bayesian variable selection, for the identification of covariates associated with microbial taxa abundance data. The underlying Bayesian model accommodates phylogenetic structure in the abundance data and various parameterizations of the covariates' prior probabilities of inclusion. Conclusion: While developed to study the human microbiome, our software can be employed in various research applications where the aim is to generate insights into the relations between a set of covariates and compositional data, with or without a known tree-like structure.
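The Dirichlet-multinomial likelihood underlying such models can be sketched directly. This is the generic distribution, not MicroBVS code (MicroBVS is an R package); the counts and concentration parameters are hypothetical:

```python
import math

def dirichlet_multinomial_ll(counts, alpha):
    """Log pmf of a Dirichlet-multinomial observation: integer taxa counts
    for one sample, with concentration parameters alpha (which, in models
    like MicroBVS, are driven by the selected covariates)."""
    n = sum(counts)
    a0 = sum(alpha)
    ll = math.lgamma(n + 1) + math.lgamma(a0) - math.lgamma(n + a0)
    for c, a in zip(counts, alpha):
        ll += math.lgamma(c + a) - math.lgamma(a) - math.lgamma(c + 1)
    return ll
```

A useful sanity check: with alpha = (1, 1), every composition of n reads into two taxa is equally likely, so three reads split as (2, 1) have probability 1/4.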


2017 · Vol 13 (2) · pp. 203–260 · Author(s): Danielle Barth, Vsevolod Kapatsinski

Abstract: The present paper presents a multimodel inference approach to linguistic variation, expanding on prior work by Kuperman and Bresnan (2012). We argue that corpus data often present the analyst with high model selection uncertainty. This uncertainty is inevitable given that language is highly redundant: every feature is predictable from multiple other features. However, the uncertainty involved in model selection is ignored by the standard method of selecting the single best model and inferring the effects of the predictors under the assumption that the best model is true. Multimodel inference avoids committing to a single model: predictions are instead based on the entire set of plausible models, with each model's contribution weighted by its predictive value. We argue that multimodel inference is superior to model selection both for the I-language goal of inferring the mental grammars that generated the corpus and for the E-language goal of predicting characteristics of future speech samples from the community the corpus represents. Applying multimodel inference to the classic problem of English auxiliary contraction, we show that the choice between multimodel inference and model selection matters in practice: the best model may contain predictors that are not significant when the full set of plausible models is considered, and may omit predictors that are significant across that full set. We also contribute to the study of English auxiliary contraction itself, documenting effects of priming, contextual predictability, and specific syntactic constructions, and providing evidence against effects of phonological context.
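One common way to implement this kind of multimodel inference averages each predictor's coefficient over all models, counting a contribution of zero from models that omit it, and summarizes a predictor's support as the summed weight of the models containing it. The sketch below uses Akaike weights; the model set, AIC values, and coefficients are invented for illustration:

```python
import math

def akaike_weights(aics):
    """Akaike weights from a list of AIC values."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]

def averaged_coef(models, predictor):
    """Model-averaged coefficient: a model that omits the predictor
    contributes 0 to the weighted sum."""
    w = akaike_weights([m["aic"] for m in models])
    return sum(wi * m["coefs"].get(predictor, 0.0) for wi, m in zip(w, models))

def importance(models, predictor):
    """Relative importance: summed weight of models containing the predictor."""
    w = akaike_weights([m["aic"] for m in models])
    return sum(wi for wi, m in zip(w, models) if predictor in m["coefs"])

# Hypothetical plausible-model set for auxiliary contraction
# (coefficients on the logit scale)
models = [
    {"aic": 801.2, "coefs": {"priming": 0.8, "predictability": 1.1}},
    {"aic": 802.0, "coefs": {"priming": 0.7}},
    {"aic": 806.5, "coefs": {"predictability": 1.3}},
]
```

This illustrates the paper's point: a predictor's model-averaged effect is shrunk toward zero in proportion to the weight of plausible models that exclude it, so an effect that looks solid in the single best model can weaken once the full model set is considered.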


2016 · Vol 24 (2) · pp. 230–245 · Author(s): Gitta H. Lubke, Ian Campbell, Dan McArtor, Patrick Miller, Justin Luningham, ...
