Bayesian Approaches to the Determination of Sample Sizes for Binomial and Multinomial Sampling: Some Comments on the Paper by Pham-Gia and Turkkan

Author(s):  
C. J. Adcock
1988 ◽  
Vol 71 (1) ◽  
pp. 41-43


Author(s):  
Octave J Francis ◽  
George M Ware ◽  
Allen S Carman ◽  
Gary P Kirschenheuter ◽  
Shia S Kuan

Abstract
Data were gathered, during a study on the development of an automated system for the extraction, cleanup, and quantitation of mycotoxins in corn, to determine whether it was scientifically sound to reduce the analytical sample size. Test portions of 5, 10, and 25 g were analyzed and statistically compared with 50 g test portions of the same composites for aflatoxin concentration variance. Statistical tests used to determine whether the 10 and 50 g sample sizes differed significantly showed a satisfactory observed variance ratio (Fobs) of 2.03 for computations of pooled standard deviations; paired t-test values of 0.952, 1.43, and 0.224 were computed for each of the 3 study samples. The results meet acceptable limits, since each sample's t-test result is less than the published critical value |t| = 1.6909 for the test conditions. The null hypothesis is retained, since the sample sizes do not give significantly different values for the mean analyte concentration. The percent coefficients of variation (CVs) for all samples tested were within the expected range. In addition, the variance due to sample mixing was evaluated using radioisotope-labeled materials, yielding an acceptable CV of 22.2%. The variance due to the assay procedure was also evaluated and showed an aflatoxin B1 recovery of 78.9% and a CV of 11.4%. Results support the original premise that a sufficiently ground and blended sample would produce an analyte variance for a 10 g sample that was statistically comparable with that for a 50 g sample.
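The comparison described above, a variance-ratio (F) check followed by a paired t-test on matched test portions, can be sketched in a few lines. This is a minimal illustration with hypothetical measurements, not the study's actual data or analysis code:

```python
# Illustrative sketch: comparing aflatoxin concentrations from paired 10 g and
# 50 g test portions of the same composites, using a variance-ratio check and
# a paired t-test. The concentration values below are hypothetical.
import math

conc_10g = [18.2, 21.5, 19.8, 22.1, 20.4, 17.9]  # ng/g, 10 g portions
conc_50g = [19.0, 20.8, 20.1, 21.5, 19.7, 18.6]  # ng/g, 50 g portions

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # Sample variance with n - 1 degrees of freedom.
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Observed variance ratio (larger variance over smaller), compared against an
# F critical value in practice.
v10, v50 = variance(conc_10g), variance(conc_50g)
f_obs = max(v10, v50) / min(v10, v50)

# Paired t statistic on the within-composite differences.
diffs = [a - b for a, b in zip(conc_10g, conc_50g)]
d_bar = mean(diffs)
s_d = math.sqrt(variance(diffs))
t_stat = d_bar / (s_d / math.sqrt(len(diffs)))

print(f"F_obs = {f_obs:.3f}, paired t = {t_stat:.3f}")
```

As in the abstract, the null hypothesis of equal means would be retained whenever |t| falls below the tabulated critical value for the test's degrees of freedom.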


2016 ◽  
Vol 19 (4) ◽  
pp. 426-432 ◽  
Author(s):  
Clive Roland Boddy

Purpose
Qualitative researchers have been criticised for not justifying sample size decisions in their research. This short paper addresses the issue of which sample sizes are appropriate and valid within different approaches to qualitative research.
Design/methodology/approach
The sparse literature on sample sizes in qualitative research is reviewed and discussed. This examination is informed by the author's personal experience, as an editor, of assessing reviewer comments as they relate to sample size in qualitative research. The discussion is also informed by the author's own experience of undertaking commercial and academic qualitative research over the last 31 years.
Findings
In qualitative research, the determination of sample size is contextual and partially dependent upon the scientific paradigm under which investigation is taking place. For example, qualitative research oriented towards positivism requires larger samples than in-depth qualitative research does, so that a representative picture of the whole population under review can be gained. Nonetheless, the paper also concludes that sample sizes involving a single case can be highly informative and meaningful, as demonstrated in examples from management and medical research. Unique examples of research using a single sample or case, but involving new areas or findings that are potentially highly relevant, can be worthy of publication. Theoretical saturation can also be a useful guide in designing qualitative research, with practical research illustrating that data saturation may occur with samples of 12 in a relatively homogeneous population.
Practical implications
Sample sizes as low as one can be justified. Researchers and reviewers may find the discussion in this paper a useful guide to determining and critiquing sample size in qualitative research.
Originality/value
Sample size in qualitative research is always mentioned by reviewers of qualitative papers, but discussion tends to be simplistic and relatively uninformed. The current paper draws attention to how sample sizes, at both ends of the size continuum, can be justified by researchers. This will also aid reviewers in commenting on the appropriateness of sample sizes in qualitative research.


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Mi Tian ◽  
Xiaotao Sheng

Applying random field theory involves two important issues: statistical homogeneity (or stationarity) and the determination of random field parameters and the correlation function. However, profiles of soil properties are typically assumed to be statistically homogeneous or stationary without rigorous statistical verification. It is also a challenging task to simultaneously determine random field parameters and the correlation function, owing to the limited amount of direct test data and the various uncertainties (e.g., transformation uncertainties) arising during site investigation. This paper presents Bayesian approaches for probabilistic characterization of undrained shear strength using cone penetration test (CPT) data and prior information. Homogeneous soil units are first identified using CPT data and subsequently assessed for weak stationarity using the modified Bartlett test, under which stationarity is the null hypothesis. Then, Bayesian approaches are developed to determine the random field parameters and simultaneously select the most probable correlation function from a pool of candidate correlation functions within the identified statistically homogeneous layers. The proposed approaches are illustrated using CPT data from a clay site in Shanghai, China. It is shown that Bayesian approaches provide a rational tool for determining a random field model for probabilistic characterization of undrained shear strength with consideration of transformation uncertainty.
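The idea of selecting among candidate correlation functions can be illustrated with a much simpler stand-in for the paper's Bayesian machinery: fit a correlation length for each candidate model to a sample autocorrelation and compare misfits. The lag and autocorrelation values below are synthetic, and the least-squares comparison is only a crude proxy for the posterior model probabilities the paper computes:

```python
# Minimal illustration (not the paper's Bayesian formulation): fitting the
# scale of fluctuation theta for two common candidate autocorrelation models,
#   exponential:          rho(h) = exp(-2h / theta)
#   squared-exponential:  rho(h) = exp(-pi (h / theta)^2)
# to a synthetic sample autocorrelation, then choosing the better fit.
import math

lags = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]              # separation distances (m)
rho_sample = [0.82, 0.66, 0.52, 0.40, 0.31, 0.24]  # hypothetical sample ACF

def sse(model, theta):
    # Sum of squared errors between model and sample autocorrelation.
    return sum((model(h, theta) - r) ** 2 for h, r in zip(lags, rho_sample))

def fit_theta(model):
    # Crude grid search over theta; a real analysis would optimise properly.
    thetas = [0.1 + 0.01 * i for i in range(400)]
    return min(thetas, key=lambda t: sse(model, t))

exp_model = lambda h, t: math.exp(-2.0 * h / t)
sqx_model = lambda h, t: math.exp(-math.pi * (h / t) ** 2)

candidates = {"exponential": exp_model, "squared-exponential": sqx_model}
fits = {name: (fit_theta(m), sse(m, fit_theta(m)))
        for name, m in candidates.items()}
best = min(fits, key=lambda name: fits[name][1])
print(f"best model: {best}, (theta, SSE) = {fits[best]}")
```

In the paper's fully Bayesian treatment, the comparison would instead weigh the evidence for each candidate function, with measurement and transformation uncertainties entering the likelihood; the sketch only conveys the model-selection step.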


2015 ◽  
Vol 27 (1) ◽  
pp. 114-125 ◽  
Author(s):  
BC Tai ◽  
ZJ Chen ◽  
D Machin

In designing randomised clinical trials involving competing risks endpoints, it is important to consider competing events to ensure appropriate determination of sample size. We conduct a simulation study to compare sample sizes obtained from the cause-specific hazard and cumulative incidence (CMI) approaches, first assuming exponential event times. As the proportional subdistribution hazard assumption does not hold for the CMI exponential (CMIExponential) model, we further investigate the impact of violating this assumption by comparing the results of the CMI exponential model with those of a CMI model assuming a Gompertz distribution (CMIGompertz), for which the proportional assumption is tenable. The simulation suggests that the CMIExponential approach requires a considerably larger sample size when treatment reduces the hazards of both the main event, A, and the competing risk, B. When treatment has a beneficial effect on A but no effect on B, the sample sizes required by the two methods are largely similar, especially for a large reduction in the main risk. If treatment has a protective effect on A but adversely affects B, then the sample size required by CMIExponential is notably smaller than that required by the cause-specific hazard approach for small to moderate reductions in the main risk. Further, a smaller sample size is required for CMIGompertz than for CMIExponential. The choice between a cause-specific hazard or CMI model for competing risks outcomes has implications for the study design. It should be made on the basis of the clinical question of interest and the validity of the associated model assumptions.
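The exponential competing-risks setup the simulation starts from is easy to reproduce in miniature. The sketch below (not the authors' simulation code; the hazard values are hypothetical) generates latent exponential times for the main event A and the competing risk B, estimates the cumulative incidence of A by Monte Carlo, and checks it against the closed form available under exponential cause-specific hazards:

```python
# Hedged sketch of the exponential competing-risks model: two independent
# latent exponential times, with the observed event being whichever occurs
# first. Hazard values below are hypothetical.
import math
import random

random.seed(0)

def cuminc_A(lambda_A, lambda_B, t, n=100_000):
    """Monte Carlo estimate of the cumulative incidence of cause A by time t,
    i.e. P(T <= t and the first event is A)."""
    count = 0
    for _ in range(n):
        tA = random.expovariate(lambda_A)  # latent time to main event A
        tB = random.expovariate(lambda_B)  # latent time to competing risk B
        if tA < tB and tA <= t:
            count += 1
    return count / n

# Under exponential cause-specific hazards lA and lB, the closed form is
#   CIF_A(t) = lA / (lA + lB) * (1 - exp(-(lA + lB) t)).
lA, lB = 0.3, 0.1
est = cuminc_A(lA, lB, t=2.0)
exact = lA / (lA + lB) * (1.0 - math.exp(-(lA + lB) * 2.0))
print(f"simulated = {est:.3f}, exact = {exact:.3f}")
```

A sample-size simulation along the paper's lines would repeat this generation per arm, apply the treatment effect to one or both cause-specific hazards, and count the trials in which the test of interest rejects.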
