The multiple testing problem for Box-Pierce statistics

2014, Vol 8 (1), pp. 497-522
Author(s): Tucker McElroy, Brian Monsell

2009, Vol 4 (3), pp. 291-293
Author(s): Thomas E. Nichols, Jean-Baptiste Poline

The article “Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition” (Vul, Harris, Winkielman, & Pashler, 2009, this issue) makes a broad case that current practice in neuroimaging methodology is deficient. Vul et al. go so far as to demand that authors retract or restate results, which we find wrongly casts suspicion on the confirmatory inference methods that form the foundation of neuroimaging statistics. We contend the authors' argument is overstated and that their work can be distilled down to two points already familiar to the neuroimaging community: that the multiple testing problem must be accounted for, and that reporting of methods and results should be improved. We also illuminate their concerns with standard statistical concepts such as the distinction between estimation and inference and between confirmatory and post hoc inferences, which makes their findings less puzzling.


2019
Author(s): David C. Handler, Paul A. Haynes

Abstract
The multiple testing problem is a well-known statistical stumbling block in high-throughput data analysis, where large-scale repetition of statistical tests introduces unwanted noise into the results. While approaches exist to overcome the multiple testing problem, these methods focus on theoretical statistical clarification rather than incorporating experimentally derived measures to ensure appropriately tailored analysis parameters. Here, we introduce a method for estimating inter-replicate variability in reference samples for a quantitative proteomics experiment using permutation analysis. This can function as a modulator to multiple testing corrections such as the Benjamini-Hochberg ordered Q value test. We refer to this as a ‘same-same’ analysis, since the method incorporates six biological replicates of the reference sample and determines, through non-redundant triplet pairwise comparisons, the level of quantitative noise inherent within the system. The method can be used to produce an experiment-specific Q value cut-off that achieves a specified false discovery rate at the quantitation level, such as 1%. The same-same method is applicable to any experimental set that incorporates six replicates of a reference sample. To facilitate access to this approach, we have developed a same-same analysis R module that is freely available and ready to use via the internet.
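The abstract above builds on the Benjamini-Hochberg ordered Q value correction. For reference, here is a minimal sketch of the standard Benjamini-Hochberg step-up procedure (not the authors' permutation-based same-same modulation, which is not reproduced here); it is written in Python with NumPy for illustration, and the function name and interface are assumptions rather than the paper's R module API:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Standard BH step-up procedure.

    Returns a boolean mask marking which hypotheses are rejected
    while controlling the false discovery rate at level alpha.
    """
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)          # ranks p-values from smallest to largest
    ranked = p[order]
    # BH critical values: (i / n) * alpha for rank i = 1..n
    thresh = (np.arange(1, n + 1) / n) * alpha
    below = ranked <= thresh
    reject = np.zeros(n, dtype=bool)
    if below.any():
        # Reject all hypotheses up to the largest rank passing its threshold
        kmax = int(np.max(np.nonzero(below)[0]))
        reject[order[: kmax + 1]] = True
    return reject

# Example: three small p-values survive correction, one large one does not
mask = benjamini_hochberg([0.01, 0.02, 0.03, 0.5], alpha=0.05)
```

The paper's contribution, as described, is to replace the fixed alpha with an experiment-specific cut-off derived from permutation analysis of the six reference replicates, so that the chosen Q value reflects the measured inter-replicate noise rather than a theoretical default.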

