Bayesian Discovery Sampling in Financial Auditing: A Hierarchical Prior Model for Substantive Test Sample Sizes

Author(s):  
Paul C. Van Batenburg ◽  
Anthony O'Hagan ◽  
Ruud H. Veenstra
2015 ◽  
Vol 8 (2) ◽  
pp. 343-354 ◽  
Author(s):  
Zhigang Wei ◽  
Robert Rebandt ◽  
Michael Start ◽  
Litang Gao ◽  
Jason Hamilton ◽  
...  

1999 ◽  
Vol 26 (1) ◽  
pp. 39-44 ◽  
Author(s):  
T. B. Whitaker ◽  
F. G. Giesbrecht ◽  
W. M. Hagler

Loose shelled kernels (LSK) are a defined grade component of farmers stock peanuts and represented, on average, 33.3% of the total aflatoxin mass and 7.7% of the kernel mass among the 120 farmers stock peanut lots studied. The functional relationship between aflatoxin in LSK taken from 2-kg test samples and aflatoxin in farmers stock peanut lots was determined to be linear with zero intercept and a slope of 0.297. The correlation between aflatoxin in LSK and aflatoxin in the lot was 0.844, which suggests that LSK taken from large test samples can be used to estimate the aflatoxin concentration in a farmer's lot. Using only LSK allows large test samples to be used to estimate the lot concentration, since LSK can easily be screened from a large test sample. If LSK accounts for 7.7% of the lot kernel mass, a 50-kg sample will yield about 3.9 kg of LSK, which can easily be prepared for aflatoxin analysis. Increasing the test sample size from 2 to 50 kg reduced the coefficient of variation associated with estimating a lot with 100 parts per billion (ppb) aflatoxin from 114% to 23%. As an example, a farmers stock aflatoxin sampling plan with dual tolerances (10 and 100 ppb) that classified lots into three categories was evaluated for two test sample sizes (2 and 50 kg). The effect of increasing the test sample size from 2 to 50 kg on the number of lots classified into each of the three categories was demonstrated when measuring aflatoxin only in LSK.
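The arithmetic behind these figures can be sketched numerically. The snippet below is an illustrative check only, built from the numbers reported in the abstract (7.7% LSK mass fraction, slope 0.297 with zero intercept); the function names are hypothetical and not from the study.

```python
# Illustrative check of the reported LSK figures; not code from the study.
LSK_MASS_FRACTION = 0.077   # LSK share of lot kernel mass (reported average)
SLOPE = 0.297               # lot ppb per LSK ppb, zero intercept (reported)

def lsk_mass_kg(test_sample_kg):
    """Expected LSK mass screened from a test sample of the given size."""
    return LSK_MASS_FRACTION * test_sample_kg

def estimate_lot_ppb(lsk_ppb):
    """Estimate the lot aflatoxin concentration from the LSK concentration."""
    return SLOPE * lsk_ppb

print(lsk_mass_kg(50))        # about 3.85 kg of LSK from a 50-kg sample
print(estimate_lot_ppb(100))  # roughly 29.7 ppb lot estimate for 100 ppb in LSK
```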


Entropy ◽  
2018 ◽  
Vol 20 (12) ◽  
pp. 977 ◽  
Author(s):  
Li Wang ◽  
Ali Mohammad-Djafari ◽  
Nicolas Gac ◽  
Mircea Dumitru

In this paper, a hierarchical prior model based on the Haar transformation and an appropriate Bayesian computational method for X-ray CT reconstruction are presented. Given the piecewise-continuous property of the object, a multilevel Haar transformation is used to obtain a sparse representation of the object. The sparse structure is enforced via a generalized Student-t distribution (Stg), expressed as the marginal of a normal-inverse Gamma distribution. The proposed model and corresponding algorithm are designed to adapt to specific 3D data sizes and to be used in both medical and industrial Non-Destructive Testing (NDT) applications. In the proposed Bayesian method, the parameters of the hierarchical prior model are estimated iteratively, with the iterative algorithm initialized from the parameters of the prior distributions. A novel initialization strategy is presented and validated experimentally. We compare the proposed method with two state-of-the-art approaches, showing that our method has better reconstruction performance when fewer projections are considered and when projections are acquired from limited angles.
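The "Student-t as the marginal of a normal-inverse Gamma distribution" construction mentioned in this abstract can be illustrated with the standard scale-mixture identity: if sigma² is drawn from an inverse-Gamma distribution and x from N(0, sigma²), then x is marginally Student-t. The sketch below shows only this textbook special case (it is not the authors' code, and the choice of ν = 6 is arbitrary):

```python
import random
import statistics

random.seed(1)
NU = 6.0  # degrees of freedom; the St(NU) variance is NU / (NU - 2) = 1.5

def sample_student_t_via_mixture():
    """Draw x ~ St(NU) by marginalizing a normal-inverse-Gamma pair:
    sigma^2 ~ Inv-Gamma(NU/2, NU/2), then x | sigma^2 ~ N(0, sigma^2)."""
    # Inverting Gamma(shape=NU/2, scale=2/NU) yields Inv-Gamma(NU/2, scale=NU/2).
    sigma2 = 1.0 / random.gammavariate(NU / 2, 2.0 / NU)
    return random.gauss(0.0, sigma2 ** 0.5)

draws = [sample_student_t_via_mixture() for _ in range(200_000)]
print(statistics.pvariance(draws))  # close to 1.5, the St(6) variance
```

The heavy tails of the marginal (relative to a Gaussian) are what make this prior sparsity-enforcing when placed on transform coefficients.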


2021 ◽  
Author(s):  
Philipp Dechent ◽  
Samuel Greenbank ◽  
Felix Hildenbrand ◽  
Saad Jbabdi ◽  
Dirk Uwe Sauer ◽  
...  

2017 ◽  
Author(s):  
Benjamin O. Turner ◽  
Erick J. Paul ◽  
Michael B. Miller ◽  
Aron K. Barbey

Despite a growing body of research suggesting that task-based functional magnetic resonance imaging (fMRI) studies often lack statistical power because their samples are too small, the proliferation of such underpowered studies continues unabated. Using large independent samples across eleven distinct tasks, we demonstrate the impact of sample size on replicability, assessed at different levels of analysis relevant to fMRI researchers. We find that the degree of replicability for typical sample sizes is modest, and that even sample sizes much larger than typical (e.g., N = 100) produce results that fall well short of perfect replicability. Thus, our results join the existing line of work advocating for larger sample sizes. Moreover, because we test sample sizes over a fairly large range and use intuitive metrics of replicability, we hope that our results will be more understandable and convincing to researchers who may have found previous arguments for larger samples inaccessible.


2000 ◽  
Vol 177 (S39) ◽  
pp. s21-s27 ◽  
Author(s):  
Bob van Wijngaarden ◽  
Aart H. Schene ◽  
Maarten Koeter ◽  
José Luis Vázquez-Barquero ◽  
Helle Charlotte Knudsen ◽  
...  

Background: In international research on the consequences of psychiatric illness for relatives of patients, the need for an internationally standardised measure has been identified. Aims: To test the internal consistency and test-retest reliability of the Involvement Evaluation Questionnaire (IEQ) in five European countries. Method: The IEQ was administered twice to a sample of relatives or friends of patients with an ICD-10 diagnosis of schizophrenia. Reliability was tested using Cronbach's α, intraclass correlation coefficients and the standard error of measurement, and reliability estimates were compared between sites. Results: Test sample sizes ranged from 30 to 90 across sites, and retest sample sizes from 21 to 77. Cronbach's α values for the IEQ sub-scales and sum score were substantial at most sites, but moderate at two. Intraclass correlation coefficients were substantial to high at all sites. The standard errors of measurement differed across sites, indicating differences in performance. Conclusion: The reliability of the IEQ in five languages varies across sites, but is sufficiently high in at least four of the five.
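For readers unfamiliar with the first reliability statistic used here: Cronbach's α for a k-item scale is α = k/(k−1) · (1 − Σ item variances / variance of the sum score). A minimal sketch of that formula follows, using toy data rather than actual IEQ items:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of respondent scores
    in the same respondent order. Uses sample (n-1) variances."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sum score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly correlated toy items with unequal variances: alpha = 8/9
print(cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]]))
```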

