Bayesian methods for the analysis of small sample multilevel data with a complex variance structure.

2013, Vol 18 (2), pp. 151-164
Author(s): Scott A. Baldwin, Gilbert W. Fellingham

2010, Vol 3 (1), pp. 176-207
Author(s): Chueh An Hsieh, Alexander Von Eye

The usefulness of Bayesian methods in estimating complex statistical models is undeniable. This paper demonstrates the capacity of Bayesian methods and proposes a comprehensive model that combines a measurement model (e.g., an item response model, IRM) with a structural model (e.g., a latent variable model, LVM). Specifically, through the probit link and Bayesian estimation, the item response model can be introduced naturally into a latent variable model. The utility of this comprehensive IRM-LVM framework is investigated with a real-data example, with promising results: data drawn from part of the British Social Attitudes Panel Survey, 1983-1986, capture the attitudes toward abortion of a representative sample of adults aged 18 or older living in Great Britain. Applying IRMs to responses gathered from repeated assessments takes the characteristics of both item responses and measurement error into account when analyzing individual developmental trajectories, and helps resolve difficult modeling issues commonly encountered in developmental research, such as small sample sizes, multiple discretely scaled items, many repeated assessments, and attrition over time.
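
For readers unfamiliar with the probit link in this setting, here is a minimal sketch of the idea (illustrative only, not the authors' full IRM-LVM specification; theta and b are generic trait and difficulty parameters, and the example data are made up):

```python
import numpy as np
from scipy.stats import norm

def probit_irm_loglik(y, theta, b):
    """Log-likelihood of binary responses y (persons x items) under a
    probit item response model: P(y_ij = 1) = Phi(theta_i - b_j),
    with latent trait theta_i and item difficulty b_j."""
    p = norm.cdf(theta[:, None] - b[None, :])
    return np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))

# Hypothetical example: 3 persons, 2 items
y = np.array([[1, 0], [1, 1], [0, 0]])
print(probit_irm_loglik(y, theta=np.array([0.5, 1.2, -0.8]),
                        b=np.array([0.0, 1.0])))
```

Because the normal CDF is also the link in probit regression, the measurement model slots directly into a latent variable model estimated by Bayesian methods.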


2021, Vol 11
Author(s): Prathiba Natesan Batley, Ratna Nandakumar, Jayme M. Palka, Pragya Shrestha

Recently, there has been increased interest in developing statistical methodologies for analyzing single case experimental design (SCED) data to supplement visual analysis. Some of these methodologies are simulation-driven, such as Bayesian methods, which can compensate for the small sample sizes that are a principal challenge of SCEDs. The present study compared two simulation-driven approaches, the Bayesian unknown change-point (BUCP) model and simulation modeling analysis (SMA), on three real datasets exhibiting "clear" immediacy, "unclear" immediacy, and delayed effects. Although SMA estimates can address some aspects of the functional relationship between the independent and outcome variables, they cannot address immediacy or provide an effect size estimate that accounts for autocorrelation, as required by the What Works Clearinghouse (WWC) Standards. BUCP overcomes these drawbacks of SMA. In the final analysis, it is recommended that both visual and statistical analyses be conducted for a thorough analysis of SCEDs.
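
As a rough illustration of the change-point idea underlying a model like BUCP (a deliberately simplified sketch, not the published BUCP specification: phase means are profiled at their MLEs rather than integrated out, noise SD is assumed known, and the data are invented):

```python
import numpy as np
from scipy.stats import norm

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location of a single change point in a
    piecewise-constant mean, with known noise SD and a uniform prior
    on the change point."""
    n = len(y)
    logp = np.full(n, -np.inf)
    for tau in range(1, n):          # change occurs after observation tau
        m1, m2 = y[:tau].mean(), y[tau:].mean()
        logp[tau] = (norm.logpdf(y[:tau], m1, sigma).sum()
                     + norm.logpdf(y[tau:], m2, sigma).sum())
    w = np.exp(logp - logp.max())
    return w / w.sum()

# Hypothetical AB-phase data: a level shift after the 5th observation
y = np.array([2.1, 1.8, 2.3, 2.0, 1.9, 4.2, 4.5, 3.9, 4.1, 4.4])
print(changepoint_posterior(y).round(3))
```

A sharply peaked posterior at the phase boundary is evidence of immediacy; a diffuse or displaced posterior suggests an unclear or delayed effect.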


2018, Vol 10 (10), pp. 3671
Author(s): Jongseon Jeon, Suneung Ahn

This work proposed a reliability demonstration test (RDT) process that can be employed to decide whether a finite population should be accepted or rejected. Bayesian and non-Bayesian approaches were compared within the proposed RDT process, as were lot and sequential sampling. One-shot devices, such as bullets, fire extinguishers, and grenades, were used as test targets, with their functioning state expressible as a binary variable. A hypergeometric distribution was adopted as the likelihood function for a finite population consisting of binary items, and it was demonstrated that the beta-binomial distribution is the conjugate prior of the hypergeometric likelihood. Under the Bayesian approach, the posterior beta-binomial distribution is used to decide on the acceptance or rejection of the population in the RDT. The proposed method could be used to select item providers in a supply chain who guarantee a predetermined reliability target and confidence level. Numerical examples show that the Bayesian approach with sequential sampling has the advantage of requiring only a small sample size to determine the acceptance of a finite population.
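
A minimal sketch of the conjugate update described here (generic variable names, a flat Beta(1, 1) prior, and a hypothetical acceptance rule in the usage lines, none of which are taken from the paper):

```python
import numpy as np
from scipy.stats import betabinom, hypergeom

def posterior_defectives(N, n, k, a=1.0, b=1.0):
    """Posterior over the number of defectives D in a finite population
    of size N after observing k failures in a sample of n drawn without
    replacement. Prior: D ~ BetaBinomial(N, a, b); likelihood:
    k ~ Hypergeometric(N, D, n). Computed on the grid D = 0..N."""
    D = np.arange(N + 1)
    prior = betabinom.pmf(D, N, a, b)
    like = hypergeom.pmf(k, N, D, n)  # zero wherever D is inconsistent with k
    post = prior * like
    return post / post.sum()

# Hypothetical lot decision: accept if P(at most 5 defectives in 500) >= 0.90
post = posterior_defectives(N=500, n=40, k=0)
print(post[:6].sum())
```

In a sequential scheme, the same update is applied after each draw, and testing stops as soon as the acceptance (or rejection) probability threshold is crossed, which is what keeps the required sample size small.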


2009, Vol 2009, pp. 1-10
Author(s): Todd L. Graves, Michael S. Hamada

Good estimates of the reliability of a system make use of test data and expert knowledge at all available levels. Furthermore, by integrating all these information sources, one can determine how best to allocate scarce testing resources to reduce uncertainty. Both of these goals are facilitated by modern Bayesian computational methods. We demonstrate these tools using examples that were previously solvable only through the use of ingenious approximations, and employ genetic algorithms to guide resource allocation.
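
The core computation can be illustrated with a minimal Monte Carlo sketch for a series system (the component names, test counts, and flat Beta(1, 1) priors below are assumptions for illustration; the paper's full models integrate many more information sources, including expert knowledge):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical component-level binomial test data: (successes, trials).
tests = {"igniter": (48, 50), "fuse": (29, 30), "charge": (97, 100)}

# With Beta(1, 1) priors, each component reliability has a
# Beta(successes + 1, failures + 1) posterior. Draw from each posterior
# and propagate through the system structure (a series system here:
# every component must work) to get a full posterior for system reliability.
draws = [rng.beta(s + 1, n - s + 1, size=100_000) for s, n in tests.values()]
system = np.prod(draws, axis=0)
print(system.mean(), np.percentile(system, [2.5, 97.5]))
```

Rerunning such a posterior with hypothetical extra tests at different levels is one way to ask where additional testing would shrink system-level uncertainty the most, the allocation question the paper addresses with genetic algorithms.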


2020, pp. 001316442094280
Author(s): Roy Levy, Yan Xia, Samuel B. Green

A number of psychometricians have suggested that parallel analysis (PA) tends to yield more accurate results in determining the number of factors than other statistical methods. Nevertheless, PA can all too often suggest an incorrect number of factors, particularly under statistically unfavorable conditions (e.g., small sample sizes and low factor loadings). Because of this, researchers have recommended using multiple methods when judging how many factors to extract. Implicit in this recommendation is that, even when the number of factors is chosen based on PA, uncertainty remains. We propose a Bayesian parallel analysis (B-PA) method to incorporate this uncertainty into decisions about the number of factors. B-PA yields a probability distribution over the possible numbers of factors. We implement and compare B-PA with a frequentist approach, revised parallel analysis (R-PA), in the contexts of real and simulated data. Results show that B-PA provides relevant information about the uncertainty in determining the number of factors, particularly under conditions with small sample sizes, low factor loadings, and less distinguishable factors. Even when the number of factors indicated with the highest probability is incorrect, B-PA can assign a sizable probability to retaining the correct number of factors. Interestingly, when the mode of the distribution over the number of factors was treated as the number to retain, B-PA was somewhat more accurate than R-PA in a majority of the conditions.
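
For context, here is a minimal sketch of the classical PA procedure that B-PA builds on (this is standard parallel analysis with normal reference data, not the B-PA or R-PA algorithms themselves):

```python
import numpy as np

def parallel_analysis(X, n_sims=500, quantile=95, seed=0):
    """Classical parallel analysis: retain as many factors as there are
    leading eigenvalues of the observed correlation matrix that exceed
    the chosen quantile of eigenvalues from random normal data of the
    same dimensions."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        R = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
        sim[s] = np.sort(np.linalg.eigvalsh(R))[::-1]
    thresh = np.percentile(sim, quantile, axis=0)
    return int((obs > thresh).cumprod().sum())  # length of the leading run
```

Classical PA returns only this single integer; the contribution of B-PA is to replace the point decision with a probability distribution over candidate numbers of factors.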


2007, Vol 31 (4), pp. 374-383
Author(s): Zhiyong Zhang, Fumiaki Hamagami, Lijuan Wang, John R. Nesselroade, Kevin J. Grimm

Bayesian methods for analyzing longitudinal data in social and behavioral research are recommended for their ability to incorporate prior information when estimating simple and complex models. We first summarize the basics of Bayesian methods before presenting an empirical example in which we fit a latent basis growth curve model to achievement data from the National Longitudinal Survey of Youth. This step-by-step example illustrates how to analyze data using both noninformative and informative priors. The results show that, in addition to being an alternative to maximum likelihood estimation (MLE), Bayesian methods have unique strengths, such as the systematic incorporation of prior information from previous studies, and offer a more plausible way to analyze small-sample data than MLE.
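
The mechanics of incorporating prior information can be seen in the simplest conjugate case (a generic illustration of the noninformative-versus-informative contrast, not the latent basis growth curve model itself; the numbers are made up):

```python
import numpy as np

def posterior_mean(y, sigma, prior_mean, prior_sd):
    """Conjugate normal update for a mean with known data SD: the
    posterior mean is a precision-weighted average of the prior mean
    and the sample mean, so an informative prior stabilizes a small
    sample while a diffuse prior recovers the MLE."""
    n = len(y)
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / sigma**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(y))
    return post_mean, np.sqrt(post_var)

y = np.array([52.0, 55.0, 49.0])                              # tiny sample
print(posterior_mean(y, 10.0, prior_mean=0.0, prior_sd=1e3))  # ~ the MLE
print(posterior_mean(y, 10.0, prior_mean=60.0, prior_sd=2.0)) # pulled toward prior
```

The same precision-weighting logic applies, parameter by parameter, when priors from previous studies are placed on growth factors in the full model.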


2018
Author(s): Donald Ray Williams, Philippe Rast, Paul-Christian Bürkner

Developing meta-analytic methods is an important goal for psychological science. When there are few studies in particular, commonly used methods have several limitations, the most notable of which is underestimation of between-study variability. Although Bayesian methods are often recommended for small sample situations, their performance has not been thoroughly examined in the context of meta-analysis. Here, we characterize and apply weakly informative priors for estimating meta-analytic models and demonstrate with extensive simulations that fully Bayesian methods overcome boundary estimates of exactly zero between-study variance, better maintain error rates, and have lower frequentist risk according to Kullback-Leibler divergence. While our results show that combining evidence from few studies is non-trivial, we argue that this is an important goal that deserves further consideration in psychology. Further, we suggest that frequentist properties can provide important information for Bayesian modeling. We conclude with meta-analytic guidelines for applied researchers that can be implemented with the provided computer code.
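
A minimal sketch of the boundary-avoidance point (a generic grid approximation with invented data and an assumed half-normal prior scale, not the authors' models, priors, or simulation design):

```python
import numpy as np
from scipy.stats import norm, halfnorm

def tau_posterior(y, se, tau_grid, prior_scale=0.5):
    """Grid posterior for the between-study SD (tau) in a normal-normal
    random-effects meta-analysis, with a weakly informative half-normal
    prior on tau; the overall effect mu is integrated out numerically
    over a flat grid. Unlike a likelihood-based point estimate, the
    posterior cannot collapse to a boundary value of exactly zero."""
    mu_grid = np.linspace(y.min() - 3 * se.max(), y.max() + 3 * se.max(), 400)
    log_marg = np.empty_like(tau_grid)
    for i, tau in enumerate(tau_grid):
        sd = np.sqrt(se**2 + tau**2)
        ll = norm.logpdf(y[:, None], mu_grid[None, :], sd[:, None]).sum(axis=0)
        m = ll.max()
        log_marg[i] = m + np.log(np.exp(ll - m).sum())  # log-sum-exp over mu
    log_post = log_marg + halfnorm.logpdf(tau_grid, scale=prior_scale)
    w = np.exp(log_post - log_post.max())
    return w / w.sum()

# Hypothetical 4-study meta-analysis: effect estimates and standard errors
y = np.array([0.30, 0.10, 0.45, 0.20])
se = np.array([0.15, 0.20, 0.25, 0.18])
tau = np.linspace(0.001, 1.0, 200)
print((tau_posterior(y, se, tau) * tau).sum())  # posterior mean of tau
```

Even when the data alone would put the maximum at tau = 0, the posterior spreads mass over plausible nonzero values, which is the mechanism behind the error-rate advantages reported above.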


Author(s): Conly L. Rieder, S. Bowser, R. Nowogrodzki, K. Ross, G. Sluder

Eggs have long been a favorite material for studying the mechanism of karyokinesis in vivo and in vitro. They can be obtained in great numbers and, when fertilized, divide synchronously over many cell cycles. However, they are not considered a practical system for ultrastructural studies of the mitotic apparatus (MA), for several reasons, the most obvious being that sectioning them is a formidable task: over 1000 ultra-thin sections must be cut from a single 80-100 μm diameter egg, and only a small percentage of these sections will contain the area or structure of interest. It is therefore difficult and time-consuming to obtain reliable ultrastructural data on the MA of eggs, and such data are necessarily based on a small sample size.

We have recently developed a procedure that will facilitate many studies of MA ultrastructure in eggs. It is based on the availability of biological HVEMs and on the observation that 0.25 μm thick serial sections can be screened at high resolution for content (after mounting on slot grids and staining with uranyl and lead) by phase contrast light microscopy (LM; Figs 1-2).

