prior specification
Recently Published Documents

TOTAL DOCUMENTS: 45 (last five years: 4)
H-INDEX: 9 (last five years: 0)

Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 307
Author(s):  
Francisco Louzada ◽  
Diego Carvalho do Nascimento ◽  
Osafu Augustine Egbon

Spatial documentation is increasing exponentially, driven by the availability of Big Data in the Internet of Things and enabled by device miniaturization and growing data storage capacity. Bayesian spatial statistics is a useful tool for determining dependence structures and hidden patterns in space through prior knowledge and the data likelihood. However, this class of modeling remains underexplored compared with classification and regression in machine-learning models, which often assume that the data are spatiotemporally independent, that is, that dependence is absent or very weak. This systematic review therefore addresses the main models presented in the literature over the past 20 years, identifying gaps and research opportunities. Elements such as random fields, spatial domains, prior specification, covariance functions, and numerical approximations are discussed. The work explores the two subclasses of spatial smoothing: global and local.
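One of the building blocks the review surveys, the covariance function of a Gaussian random field, can be sketched in a few lines. The exponential covariance, the grid, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative exponential covariance for a Gaussian random field:
# C(d) = sigma^2 * exp(-d / rho), with d the Euclidean distance between sites.
def exponential_cov(coords, sigma2=1.0, rho=0.5):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / rho)

rng = np.random.default_rng(0)

# A small 5x5 grid of spatial locations on the unit square (hypothetical).
xs = np.linspace(0, 1, 5)
coords = np.array([(x, y) for x in xs for y in xs])

K = exponential_cov(coords)
# One draw from the zero-mean Gaussian random field prior over the grid.
field = rng.multivariate_normal(np.zeros(len(coords)), K)
```

The range parameter `rho` controls how quickly spatial dependence decays with distance, which is exactly the kind of quantity a prior specification in these models must encode.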


2021 ◽  
pp. 153450842110402
Author(s):  
Benjamin G. Solomon ◽  
Ole J. Forsberg ◽  
Monelle Thomas ◽  
Brittney Penna ◽  
Katherine M. Weisheit

Bayesian regression has emerged as a viable alternative for estimating curriculum-based measurement (CBM) growth slopes. Preliminary findings suggest such methods may yield improved efficiency relative to other linear estimators and can be embedded into data management programs for high-frequency use. However, additional research is needed, as Bayesian estimators require the specification of multiple prior distributions. The current study evaluates the accuracy of several combinations of prior values, including three distributions for the residuals, two values for the expected growth rate, and three values for the precision of the slope, when using Bayesian simple linear regression to estimate fluency growth slopes for reading CBM. We also included traditional ordinary least squares (OLS) regression as a baseline contrast. Findings suggest that the prior specification for the residual distribution had, on average, a trivial effect on the accuracy of the slope. However, the specifications for the growth rate and the precision of the slope were influential, and virtually all variants of Bayesian regression evaluated were superior to OLS. Converging evidence from both simulated and observed data now suggests that Bayesian methods outperform OLS for estimating CBM growth slopes and should be strongly considered in research and practice.
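The core contrast, a Bayesian slope estimate with a normal prior versus the OLS slope, can be sketched with the conjugate known-variance update. The data, prior values, and residual SD below are illustrative assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly CBM fluency scores: true growth of 1.5 words per week
# observed with noise over 10 weeks (all values are illustrative).
weeks = np.arange(10, dtype=float)
true_slope, sigma = 1.5, 5.0
y = 20 + true_slope * weeks + rng.normal(0, sigma, size=weeks.size)

# Center predictor and response so the intercept drops out of the slope update.
x_c = weeks - weeks.mean()
y_c = y - y.mean()

# OLS slope as the baseline contrast.
sxx = np.sum(x_c ** 2)
b_ols = np.sum(x_c * y_c) / sxx

# Conjugate update with a normal prior on the slope, N(mu0, tau0^2),
# treating the residual SD as known; prior values are illustrative.
mu0, tau0 = 1.0, 1.0
post_prec = 1.0 / tau0**2 + sxx / sigma**2
b_bayes = (mu0 / tau0**2 + (sxx / sigma**2) * b_ols) / post_prec
```

The posterior mean is a precision-weighted average that shrinks the OLS slope toward the prior mean `mu0`, which is why the choices of expected growth rate and slope precision matter most.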


2021 ◽  
Vol 11 ◽  
Author(s):  
Steffen Zitzmann ◽  
Christoph Helm ◽  
Martin Hecht

Bayesian approaches for estimating multilevel latent variable models can be beneficial in small samples. Prior distributions can be used to overcome small sample problems, for example, when priors that increase the accuracy of estimation are chosen. This article discusses two different but not mutually exclusive approaches for specifying priors. Both approaches aim at stabilizing estimators in such a way that the Mean Squared Error (MSE) of the estimator of the between-group slope will be small. In the first approach, the MSE is decreased by specifying a slightly informative prior for the group-level variance of the predictor variable, whereas in the second approach, the decrease is achieved directly by using a slightly informative prior for the slope. Mathematical and graphical inspections suggest that both approaches can be effective for reducing the MSE in small samples, thus rendering them attractive in these situations. The article also discusses how these approaches can be implemented in Mplus.
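The mechanism behind the second approach, shrinking the slope estimator to reduce its MSE, can be made concrete with the bias-variance decomposition. The sample size, residual SD, true slope, and prior values below are illustrative assumptions, not the article's settings.

```python
import numpy as np

# MSE of a shrinkage (posterior-mean) slope estimator versus plain OLS,
# computed analytically for a small sample; all values are illustrative.
n, sigma, beta = 8, 2.0, 0.5       # sample size, residual SD, true slope
x = np.arange(n, dtype=float)
x_c = x - x.mean()
sxx = np.sum(x_c ** 2)

var_ols = sigma**2 / sxx           # OLS slope is unbiased, so its MSE = variance

mu0, tau0 = 0.4, 0.3               # slightly informative prior, centered near truth
w = (sxx / sigma**2) / (sxx / sigma**2 + 1 / tau0**2)   # shrinkage weight on OLS
# MSE = variance + squared bias of the shrunken estimator.
mse_shrunk = w**2 * var_ols + (1 - w)**2 * (beta - mu0)**2
```

With the prior centered near the true slope, `mse_shrunk` falls below `var_ols`: the added bias is more than offset by the variance reduction, which is the stabilization effect the article exploits.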


2020 ◽  
Author(s):  
Jeff S. Wesner ◽  
Justin F.P. Pomeranz

Bayesian data analysis is increasingly used in ecology, but prior specification remains focused on choosing non-informative priors (e.g., flat or vague priors). One barrier to choosing more informative priors is that priors must be specified on model parameters (e.g., intercepts, slopes, sigmas), but prior knowledge often exists on the level of the response variable. This is particularly true for common models in ecology, like generalized linear mixed models, which may have a link function and dozens of parameters, each of which needs a prior distribution. We suggest that this difficulty can be overcome by simulating from the prior predictive distribution and visualizing the results on the scale of the response variable. In doing so, some common choices for non-informative priors on parameters can easily be seen to produce biologically impossible values of response variables. Such implications of prior choices are difficult to foresee without visualization. We demonstrate a workflow for prior selection using simulation and visualization with two ecological examples (predator-prey body sizes and spider responses to food competition). This approach is not new, but its adoption by ecologists will help to better incorporate prior information in ecological models, thereby maximizing one of the benefits of Bayesian data analysis.


2020 ◽  
Author(s):  
Camiel van Zundert ◽  
Emma Somer ◽  
Milica Miocevic

Bayesian mediation analysis using the method of covariances requires specifying a prior for the covariance matrix of the independent variable, mediator, and outcome. Using a conjugate inverse-Wishart prior has been the norm, even though this choice assumes equal levels of informativeness for all elements in the covariance matrix. This paper describes separation strategy priors for the single mediator model, develops a Prior Predictive Check (PrPC) for inverse-Wishart and separation strategy priors, and implements the PrPC in a Shiny app. An empirical example illustrates the possibilities in the app. Guidelines are provided for selecting the optimal prior specification for the prior knowledge researchers wish to encode.
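The kind of prior predictive check the paper implements can be sketched by sampling covariance matrices from an inverse-Wishart prior and examining the correlations they imply. The degrees of freedom and scale matrix below are illustrative default choices, not the paper's recommendations.

```python
import numpy as np
from scipy.stats import invwishart

# Prior predictive check on the correlations implied by a "default"
# inverse-Wishart prior for the 3x3 covariance of (X, M, Y);
# df and scale below are illustrative choices.
dim, df = 3, 4
draws = invwishart.rvs(df=df, scale=np.eye(dim), size=5_000, random_state=3)

# Convert each covariance draw to the implied X-M correlation.
sd = np.sqrt(draws[:, [0, 1], [0, 1]])
corr_xm = draws[:, 0, 1] / (sd[:, 0] * sd[:, 1])
```

Inspecting the spread of `corr_xm` reveals how (un)informative the prior really is about each element; a separation-strategy prior lets the analyst set the priors on standard deviations and correlations directly instead of bundling them into one matrix prior.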


2020 ◽  
Author(s):  
James Ohisei Uanhoro

The goal of this paper is to frame structural equation models (SEMs) as Bayesian multilevel regression models. Framing SEMs as Bayesian regression models provides an alternative approach to understanding SEMs that can improve model transparency and enhance innovation during modeling. For demonstration, we analyze six indicators of living standards data from 101 countries. The data are proportions and we develop confirmatory factor analysis as regression while accommodating the fact that the data are proportions. We also provide extensive guidance on prior specification, which is relevant for estimating complex regression models such as these. Finally, we run through regression equations for SEMs beyond the scope of the demonstration.
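The framing of a factor model as regression on a latent variable can be sketched generatively for proportion indicators. The logit link, loadings, and noise scale below are illustrative assumptions for the sketch, not the paper's specification (only the 6-indicator, 101-country shape comes from the abstract).

```python
import numpy as np

rng = np.random.default_rng(4)

# Generative sketch of a one-factor model for proportion-valued indicators,
# written as regressions on a latent variable with a logit link
# (loadings, intercepts, and noise scale are illustrative assumptions).
n_countries, n_indicators = 101, 6
eta = rng.normal(0, 1, n_countries)                 # latent living standard
loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
intercepts = rng.normal(0, 0.5, n_indicators)

# Each indicator is a regression of the logit of the proportion on eta.
logits = (intercepts + eta[:, None] * loadings
          + rng.normal(0, 0.3, (n_countries, n_indicators)))
props = 1 / (1 + np.exp(-logits))                   # back to (0, 1)
```

Seen this way, each loading is just a regression slope on the latent predictor, which is what makes the priors on loadings interpretable as ordinary regression priors.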


2020 ◽  
Vol 7 (1) ◽  
pp. 251-278 ◽  
Author(s):  
Jennifer Hill ◽  
Antonio Linero ◽  
Jared Murray

Bayesian additive regression trees (BART) provides a flexible approach to fitting a variety of regression models while avoiding strong parametric assumptions. The sum-of-trees model is embedded in a Bayesian inferential framework to support uncertainty quantification and provide a principled approach to regularization through prior specification. This article presents the basic approach and discusses further development of the original algorithm that supports a variety of data structures and assumptions. We describe augmentations of the prior specification to accommodate higher dimensional data and smoother functions. Recent theoretical developments provide justifications for the performance observed in simulations and other settings. Use of BART in causal inference provides an additional avenue for extensions and applications. We discuss software options as well as challenges and future directions.
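The sum-of-trees representation at the heart of BART can be illustrated with a toy fit. Note the hedge: BART itself updates the trees by Bayesian backfitting MCMC under regularizing priors; the greedy stagewise loop below only shows what a sum of shallow trees looks like, and every setting in it is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy sum-of-trees fit: m depth-1 "stumps", each fit greedily to the
# current residual. This is NOT BART's Bayesian backfitting sampler;
# it only illustrates the sum-of-trees representation.
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

def fit_stump(x, y):
    """Best single-split stump (threshold, left mean, right mean) by SSE."""
    best = None
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]

m, shrink = 50, 0.1
pred = np.zeros_like(y)
for _ in range(m):
    t, lmean, rmean = fit_stump(x, y - pred)     # fit stump to the residual
    pred += shrink * np.where(x <= t, lmean, rmean)

mse = np.mean((y - pred)**2)
```

Each tree contributes only a small, shrunken piece of the fit; in BART the same effect is achieved by the prior, which keeps individual trees weak and lets regularization emerge from the prior specification rather than a tuning constant.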


Author(s):  
Qingyuan Zhao ◽  
Yang Chen ◽  
Dylan S Small

Abstract

Background: On January 23, 2020, a quarantine was imposed on travel in and out of Wuhan, where the 2019 novel coronavirus (2019-nCoV) outbreak originated. Previous analyses estimated the basic epidemiological parameters using the symptom onset dates of confirmed cases in Wuhan and outside China.

Methods: We obtained information on 46 coronavirus cases who traveled from Wuhan before January 23 and were subsequently confirmed in Hong Kong, Japan, Korea, Macau, Singapore, or Taiwan as of February 5, 2020. Most cases have a detailed travel history and disease course. An important distinction from previous analyses is that we used these data to informatively simulate the infection time of each case from the symptom onset time, the previously reported incubation interval, and the travel history. We then fitted a simple exponential growth model, with adjustment for the January 23 travel ban, to the distribution of the simulated infection times. We used a Bayesian analysis with diffuse priors to quantify the uncertainty of the estimated epidemiological parameters, and performed sensitivity analyses for different choices of the incubation interval and of the hyperparameters in the prior specification.

Results: Our model provides a good fit to the distribution of infection times. Assuming that the travel rate to the selected countries and regions was constant over the study period, we found that the epidemic was doubling in size every 2.9 days (95% credible interval [CrI], 2.0 to 4.1 days). Using the previously reported serial interval for 2019-nCoV, the estimated basic reproduction number is 5.7 (95% CrI, 3.4 to 9.2). The estimates did not change substantially if we assumed that the travel rate doubled in the last 3 days before January 23, used the previously reported incubation interval for severe acute respiratory syndrome (SARS), or changed the hyperparameters in the prior specification.

Conclusions: Our estimated epidemiological parameters are higher than those of an earlier report using confirmed cases in Wuhan, indicating that 2019-nCoV could have been spreading faster than previously estimated.
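The link between the reported doubling time and reproduction number can be checked back-of-the-envelope: with doubling time T_d the growth rate is r = ln(2)/T_d, and under exponential growth with a fixed serial interval T_s, R0 is approximately exp(r * T_s). The 7.5-day serial interval below is an assumed round value for illustration, not the paper's exact input.

```python
import numpy as np

# Back-of-the-envelope link between doubling time, growth rate, and R0
# under exponential growth with a fixed serial interval.
doubling_time = 2.9                 # days (point estimate from the abstract)
r = np.log(2) / doubling_time       # epidemic growth rate per day
serial_interval = 7.5               # days (assumed illustrative value)
R0 = np.exp(r * serial_interval)    # lands in the same ballpark as 5.7
```

The fixed-serial-interval approximation overstates R0 slightly relative to methods that use the full serial-interval distribution, which is consistent with this rough value sitting a little above the reported 5.7.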


2019 ◽  
Author(s):  
Maxwell Hong ◽  
Ross Jacobucci ◽  
Gitta Lubke

Data mining methods offer psychologists a powerful tool for capturing complex relations, such as interactions and nonlinear effects, without prior specification. However, interpreting and integrating information from data mining models can be challenging. The current research proposes a strategy to identify nonlinear and interaction effects using a deductive data mining approach that, in essence, consists of comparing increasingly complex data mining models. The proposed approach is applied to three empirical data sets, with details on how to interpret each step and model comparison, along with simulations providing a proof of concept. Annotated example code is also provided. Ultimately, the proposed deductive data mining approach offers a novel perspective on exploring interactions and nonlinear effects with the goal of model explanation and confirmation. Limitations of the current approach and future directions are also considered.
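The deductive idea, comparing a simpler model against a more complex one and attributing the fit gain to the added effect, can be sketched with plain least squares. The simulated data, effect sizes, and the R-squared comparison below are illustrative assumptions, not the authors' models or software.

```python
import numpy as np

rng = np.random.default_rng(6)

# Deductive comparison in the spirit of the proposed strategy: fit a
# main-effects model and a model adding an interaction term, then compare
# fit; a clear gain flags the interaction. Data are simulated for the sketch.
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 0.5 * x1 + 0.5 * x2 + 0.8 * x1 * x2 + rng.normal(0, 1, n)

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

ones = np.ones(n)
r2_main = r_squared(np.column_stack([ones, x1, x2]), y)
r2_int = r_squared(np.column_stack([ones, x1, x2, x1 * x2]), y)
gain = r2_int - r2_main   # a sizeable gain flags the interaction
```

In the actual deductive workflow the more complex model would be a flexible data mining model rather than a hand-specified interaction term; the logic of the comparison is the same.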

