COSMOLOGICAL MODEL SELECTION

2008 ◽  
Vol 23 (06) ◽  
pp. 787-802 ◽  
Author(s):  
PIA MUKHERJEE ◽  
DAVID PARKINSON

We give an overview of recent progress in the field of cosmological model selection. Model selection statistics, such as those based on information theory and on Bayesian statistics, are introduced and discussed. In the Bayesian framework, the marginalised model likelihood, or evidence, is the primary model selection statistic. We describe different methods of computing the evidence, focusing in particular on Nested Sampling, and describe the results of applying model selection methods to new cosmological data such as the CMB measurements by WMAP.
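As context for the nested-sampling discussion, the following is a minimal sketch of the algorithm on a toy one-dimensional problem. The model, prior bounds, and tuning constants are illustrative choices made here, not taken from the paper; real cosmological applications use far more sophisticated samplers to replace live points.

```python
import math
import random

random.seed(42)

# Toy problem: standard-normal likelihood, uniform prior on [-5, 5].
# The analytic evidence is Z = (1/10) * P(|theta| < 5 under N(0,1)) ~ 0.1.
LO, HI = -5.0, 5.0
N_LIVE, N_ITER = 100, 600

def loglike(theta):
    return -0.5 * theta * theta - 0.5 * math.log(2.0 * math.pi)

def logaddexp(a, b):
    if a == -math.inf:
        return b
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Draw the initial live points from the prior.
live = [random.uniform(LO, HI) for _ in range(N_LIVE)]
logZ, X_prev = -math.inf, 1.0

for i in range(1, N_ITER + 1):
    worst = min(range(N_LIVE), key=lambda j: loglike(live[j]))
    L_star = loglike(live[worst])
    X = math.exp(-i / N_LIVE)                 # expected shrinkage of prior mass
    logZ = logaddexp(logZ, L_star + math.log(X_prev - X))
    X_prev = X
    # Replace the worst point with a prior draw above the likelihood bound
    # (simple rejection sampling; fine in 1D, impractical in high dimensions).
    while True:
        cand = random.uniform(LO, HI)
        if loglike(cand) > L_star:
            live[worst] = cand
            break

# Add the contribution of the remaining live points.
for theta in live:
    logZ = logaddexp(logZ, loglike(theta) + math.log(X_prev / N_LIVE))

Z = math.exp(logZ)
print(Z)
```

The key idea the sketch shows: the evidence integral Z = ∫ L(θ)π(θ)dθ is rewritten as a one-dimensional integral of the likelihood over the enclosed prior mass X, which nested sampling accumulates as a sum of likelihood-bound values times shrinking prior-mass shells.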

2021 ◽  
Author(s):  
Carlos R Oliveira ◽  
Eugene D Shapiro ◽  
Daniel M Weinberger

Vaccine effectiveness (VE) studies are often conducted after the introduction of new vaccines to ensure they provide protection in real-world settings. Although susceptible to confounding, the test-negative case-control study design is the most efficient method to assess VE post-licensure. Control of confounding is often needed during the analyses, which is most efficiently done through multivariable modeling. When a large number of potential confounders are being considered, it can be challenging to know which variables need to be included in the final model. This paper highlights the importance of considering model uncertainty by re-analyzing a Lyme VE study using several confounder selection methods. We propose an intuitive Bayesian Model Averaging (BMA) framework for this task and compare the performance of BMA to that of traditional single-best-model selection methods. We demonstrate how BMA can be advantageous in situations in which there is uncertainty about model selection, by systematically considering alternative models and increasing transparency.
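To make the BMA idea concrete, here is a small hedged sketch on simulated data: rather than picking one "best" adjustment set, every candidate confounder subset is fit, each model is weighted by its approximate posterior probability (via BIC), and the exposure coefficient is averaged across models. The data, variable names, and use of linear rather than logistic models are illustrative simplifications, not the paper's actual analysis.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: outcome y, exposure x, a real confounder c1, a noise variable c2.
n = 500
c1 = rng.normal(size=n)
c2 = rng.normal(size=n)
x = 0.8 * c1 + rng.normal(size=n)
y = 1.0 * x + 0.5 * c1 + rng.normal(size=n)   # true exposure effect = 1.0

def fit_bic(cols):
    """OLS fit with the given confounder columns; return (exposure estimate, BIC)."""
    X = np.column_stack([np.ones(n), x] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    k = X.shape[1] + 1                         # coefficients + error variance
    return beta[1], -2 * loglik + k * math.log(n)

# Enumerate all confounder subsets: {}, {c1}, {c2}, {c1, c2}.
confounders = [c1, c2]
results = [fit_bic([c for c, m in zip(confounders, mask) if m])
           for mask in itertools.product([0, 1], repeat=2)]

# BIC-based weights approximate posterior model probabilities.
bics = np.array([b for _, b in results])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

beta_bma = sum(wi * bi for wi, (bi, _) in zip(w, results))
print(beta_bma)
```

Models omitting the real confounder c1 receive negligible weight, so the averaged estimate lands near the true effect of 1.0 while still reflecting uncertainty about whether c2 belongs in the model.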


2004 ◽  
Vol 5 (2) ◽  
pp. 229-241 ◽  
Author(s):  
James P. Hoffmann ◽  
Christopher D. Ellingwood ◽  
Osei M. Bonsu ◽  
Daniel E. Bentil

Entropy ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. 561
Author(s):  
Miki Aoyagi

In recent years, selecting appropriate learning models has become more important with the increased need to analyze learning systems, and many model selection methods have been developed. The learning coefficient in Bayesian estimation, which serves to measure the learning efficiency in singular learning models, plays an important role in several information criteria. The learning coefficient in regular models is known to equal the dimension of the parameter space divided by two, while that in singular models is smaller and varies across learning models. The learning coefficient is known mathematically as the log canonical threshold. In this paper, we provide a new rational blowing-up method for obtaining these coefficients. In the application to Vandermonde matrix-type singularities, we demonstrate the efficiency of this method.
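For context, a brief sketch of where the learning coefficient appears; these are standard results from singular learning theory, summarized here rather than taken from the abstract:

```latex
% Asymptotic expansion of the Bayes free energy for sample size n:
F_n = n S_n + \lambda \log n - (m - 1) \log \log n + O_p(1),
% where \lambda is the learning coefficient and m its multiplicity.

% \lambda is the log canonical threshold: -\lambda is the largest pole of
\zeta(z) = \int K(w)^{z} \, \varphi(w) \, dw,
% where K(w) is the Kullback--Leibler divergence from the true distribution
% and \varphi(w) is the prior.

% For a regular model with d parameters, \lambda = d/2;
% for singular models, \lambda \le d/2.
```

Resolving the singularities of K(w), e.g. by blow-ups as in this paper, is the standard route to computing the pole and hence the learning coefficient.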


2020 ◽  
Vol 69 (6) ◽  
pp. 1163-1179 ◽  
Author(s):  
Kris V Parag ◽  
Christl A Donnelly

Abstract Estimating temporal changes in a target population from phylogenetic or count data is an important problem in ecology and epidemiology. Reliable estimates can provide key insights into the climatic and biological drivers influencing the diversity or structure of that population, and provide evidence for hypotheses concerning its future growth or decline. In infectious disease applications, the individuals infected across an epidemic form the target population. The renewal model estimates the effective reproduction number, R, of the epidemic from counts of observed incident cases. The skyline model infers the effective population size, N, underlying a phylogeny of sequences sampled from that epidemic. Practically, R measures ongoing epidemic growth while N informs on historical caseload. While both models solve distinct problems, the reliability of their estimates depends on p-dimensional piecewise-constant functions. If p is misspecified, the model might underfit significant changes or overfit noise and promote a spurious understanding of the epidemic, which might misguide intervention policies or misinform forecasts. Surprisingly, no transparent yet principled approach for optimizing p exists. Usually, p is set heuristically, or obscurely controlled via complex algorithms. We present a computable and interpretable p-selection method based on the minimum description length (MDL) formalism of information theory. Unlike many standard model selection techniques, MDL accounts for the additional statistical complexity induced by how parameters interact. As a result, our method optimizes p so that R and N estimates properly and meaningfully adapt to available data. It also outperforms comparable Akaike and Bayesian information criteria on several classification problems, given minimal knowledge of the parameter space, and exposes statistical similarities among renewal, skyline, and other models in biology.
Rigorous and interpretable model selection is necessary if trustworthy and justifiable conclusions are to be drawn from piecewise models. [Coalescent processes; epidemiology; information theory; model selection; phylodynamics; renewal models; skyline plots]
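To illustrate the p-selection problem the abstract describes, here is a small sketch: a piecewise-constant Poisson model of case counts fit with varying numbers of segments p, with p chosen by information criteria. Note this uses plain AIC/BIC penalties on equal-width segments as a simplified stand-in; the paper's MDL criterion additionally accounts for parameter interactions, and the data and segmentation scheme here are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# Toy incidence series: piecewise-constant Poisson rate with one change-point.
rates = np.concatenate([np.full(50, 4.0), np.full(50, 12.0)])
counts = rng.poisson(rates)
n = counts.size

def neg_loglik(p):
    """Fit p equal-width constant-rate segments by MLE; Poisson negative log-likelihood."""
    nll = 0.0
    for seg in np.array_split(counts, p):
        lam = max(seg.mean(), 1e-9)
        # Drop the constant log(y!) terms, identical across all p.
        nll -= (seg * math.log(lam) - lam).sum()
    return nll

def bic(p):
    return 2 * neg_loglik(p) + p * math.log(n)

def aic(p):
    return 2 * neg_loglik(p) + 2 * p

ps = range(1, 9)
p_bic = min(ps, key=bic)
p_aic = min(ps, key=aic)
print(p_bic, p_aic)
```

With one true change-point, a well-behaved criterion should select p = 2: too few segments underfit the rate jump, too many fit Poisson noise, which is exactly the misspecification risk the abstract warns about.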


2017 ◽  
Vol 22 (2) ◽  
pp. 361-381 ◽  
Author(s):  
Zhao-Hua Lu ◽  
Sy-Miin Chow ◽  
Eric Loken
