Implementation of Bayesian Multiple Comparison Correction in the Second-Level Analysis of fMRI Data: With Pilot Analyses of Simulation and Real fMRI Datasets Based on Voxelwise Inference

2019 ◽  
Author(s):  
Hyemin Han

Abstract: We developed and tested a Bayesian multiple comparison correction method for Bayesian voxelwise second-level fMRI analysis in R. The performance of the developed method was tested with simulated and real image datasets. First, we compared false alarm and hit rates, used as proxies for selectivity and sensitivity respectively, between Bayesian and classical inference. For this comparison, we created simulated images, added noise to them, and analyzed the noise-added images while applying Bayesian and classical multiple comparison correction methods. Second, we analyzed five real image datasets to examine how our Bayesian method performed in realistic settings. In the performance assessment, the Bayesian correction method demonstrated good sensitivity (hit rate ≥ 75%) and acceptable selectivity (false alarm rate < 10%) when N ≥ 8. Furthermore, the Bayesian correction method showed better sensitivity than the classical correction method while maintaining the aforementioned acceptable selectivity.
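The comparison described above reduces to scoring a thresholded (binary) statistical map against a known ground-truth activation map. A minimal sketch of that scoring step in Python (a hypothetical helper for illustration, not the paper's R implementation):

```python
import numpy as np

def hit_and_false_alarm_rates(detected, truth):
    """Score a thresholded (binary) statistical map against ground truth.

    hit rate         = fraction of truly active voxels flagged as active
    false alarm rate = fraction of truly inactive voxels flagged as active
    """
    detected = detected.astype(bool)
    truth = truth.astype(bool)
    hit_rate = detected[truth].mean()
    false_alarm_rate = detected[~truth].mean()
    return hit_rate, false_alarm_rate

# Toy example: a 10x10 "image" with a 4x4 truly active patch.
truth = np.zeros((10, 10), dtype=bool)
truth[3:7, 3:7] = True
detected = truth.copy()
detected[3, 3] = False   # one miss
detected[0, 0] = True    # one false alarm
hr, fa = hit_and_false_alarm_rates(detected, truth)
print(round(float(hr), 3), round(float(fa), 4))  # → 0.938 0.0119
```

The paper's criteria (hit rate ≥ 75%, false alarm rate < 10%) can be checked directly against these two numbers for each correction method.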

2020 ◽  
Author(s):  
Hyemin Han

Abstract: BayesFactorFMRI is a tool developed with R and Python that allows neuroimaging researchers to conduct Bayesian second-level analysis and Bayesian meta-analysis of fMRI image data with multiprocessing. The tool expedites computationally intensive Bayesian fMRI analysis by distributing analysis tasks across multiple processors, and its GUI allows researchers who are not experts in computer programming to perform Bayesian fMRI analysis feasibly. BayesFactorFMRI is available for download via Zenodo and GitHub. It can be reused by neuroimaging researchers who intend to analyse their fMRI data with Bayesian analysis, which offers better sensitivity than classical analysis, while improving performance through multiprocessing.
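The multiprocessing pattern described here can be sketched by splitting a 4-D group dataset into slices and farming them out to worker processes. The per-slice function below is a placeholder (a simple mean) standing in for the actual per-voxel Bayes factor computation; this is an illustrative sketch assuming a fork-based POSIX platform, not BayesFactorFMRI's own code:

```python
import numpy as np
from multiprocessing import Pool

def slice_effect(slice_data):
    """Placeholder per-slice computation (mean across subjects), standing in
    for the per-voxel Bayes factor calculation the real tool performs."""
    return slice_data.mean(axis=0)

def analyze_parallel(volume, n_workers=4):
    """Split a (subjects, z, y, x) volume along z and process slices in
    parallel worker processes, then reassemble the result volume."""
    slices = [volume[:, z] for z in range(volume.shape[1])]
    with Pool(n_workers) as pool:
        results = pool.map(slice_effect, slices)
    return np.stack(results)

rng = np.random.default_rng(0)
vol = rng.normal(size=(20, 8, 16, 16))  # 20 subjects, 8 axial slices
out = analyze_parallel(vol)
print(out.shape)  # → (8, 16, 16)
```

Distributing by slice keeps each worker's task independent, which is what makes the voxelwise analysis embarrassingly parallel.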


2019 ◽  
Vol 9 (8) ◽  
pp. 198 ◽  
Author(s):  
Hyemin Han ◽  
Andrea L. Glenn ◽  
Kelsie J. Dawson

A significant challenge for fMRI research is statistically controlling for false positives without omitting true effects. Although a number of traditional methods for multiple comparison correction exist, several alternative tools have been developed that do not rely on strict parametric assumptions but instead implement alternative approaches to correct for multiple comparisons. In this study, we evaluated three of these methods, Statistical non-Parametric Mapping (SnPM), 3DClustSim, and Threshold-Free Cluster Enhancement (TFCE), by examining which produced the most consistent outcomes even when spatially autocorrelated noise was added to the original images. We assessed the false alarm rate and hit rate of each method after noise was applied to the original images.
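Spatially autocorrelated noise of the kind added in this evaluation is commonly generated by smoothing white Gaussian noise with a Gaussian kernel, whose FWHM sets the spatial correlation length. The snippet below is an illustrative sketch of that construction (using scipy; not the authors' pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatially_autocorrelated_noise(shape, fwhm_vox, rng):
    """White Gaussian noise smoothed with a Gaussian kernel; the kernel
    FWHM (in voxels) controls the spatial autocorrelation."""
    sigma = fwhm_vox / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> sigma
    noise = gaussian_filter(rng.normal(size=shape), sigma)
    return noise / noise.std()  # rescale to unit variance

rng = np.random.default_rng(42)
smooth = spatially_autocorrelated_noise((32, 32, 32), fwhm_vox=6.0, rng=rng)
white = rng.normal(size=(32, 32, 32))

# Smoothed noise is strongly correlated with its own one-voxel shift;
# white noise is not.
lag1_smooth = np.corrcoef(smooth[:-1].ravel(), smooth[1:].ravel())[0, 1]
lag1_white = np.corrcoef(white[:-1].ravel(), white[1:].ravel())[0, 1]
print(lag1_smooth > 0.5, abs(lag1_white) < 0.1)  # → True True
```

Parametric correction methods assume a particular spatial autocorrelation structure, which is why noise like this stresses them differently than white noise would.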


2015 ◽  
Vol 53 (10) ◽  
pp. 1011-1023 ◽  
Author(s):  
Joan Francesc Alonso ◽  
Sergio Romero ◽  
Miguel Ángel Mañanas ◽  
Mónica Rojas ◽  
Jordi Riba ◽  
...  

2017 ◽  
Author(s):  
Xiao Chen ◽  
Bin Lu ◽  
Chao-Gan Yan

Abstract: Concerns have been raised regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, for widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We found that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between the family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences; 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliability, they replicated poorly across distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity, and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 (40 per group)) not only minimized power (sensitivity < 2%) but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) for sex differences. Our findings have implications for selecting multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility.
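The link between power and PPV reported here follows the standard pre-study odds formulation, PPV = (power × prior) / (power × prior + α × (1 − prior)). The sketch below illustrates that relationship with assumed numbers (prior = 0.30 is a hypothetical pre-study probability, not a value from the paper):

```python
def positive_predictive_value(power, alpha, prior):
    """PPV: probability that a significant finding reflects a true effect.
    power = sensitivity (1 - beta), alpha = false positive rate,
    prior = pre-study probability that the tested effect is real."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Low power (as in small-sample studies) drags PPV down sharply:
low = positive_predictive_value(power=0.02, alpha=0.05, prior=0.30)
high = positive_predictive_value(power=0.80, alpha=0.05, prior=0.30)
print(round(low, 2), round(high, 2))  # → 0.15 0.87
```

At 2% sensitivity, most "significant" results are false positives even with α held at 5%, which matches the abstract's point that small samples undermine the credibility of significant findings.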


2000 ◽  
Vol 39 (02) ◽  
pp. 105-109 ◽  
Author(s):  
F. Lanni ◽  
T. Kanade ◽  
F. Kagalwala

Abstract: Differential Interference Contrast (DIC) microscopy is a powerful visualization tool used to study live biological cells. Its use, however, has been limited to qualitative observations; the inherently nonlinear relation between object properties and image intensity makes quantitative analysis difficult. As a first step towards measuring the optical properties of objects from DIC images, we develop a model of the image formation process using methods consistent with energy conservation laws. We verify the model by comparing real image data of manufactured specimens with simulated images of virtual objects. As a next step, we plan to use this model to reconstruct the three-dimensional properties of unknown specimens.


2018 ◽  
Author(s):  
Xiaoying Pu ◽  
Matthew Kay

Tukey emphasized decades ago that taking exploratory findings as confirmatory is “destructively foolish”. We reframe recent conversations about the reliability of results from exploratory visual analytics—such as the multiple comparisons problem—in terms of Gelman and Loken’s garden of forking paths to lay out a design space for addressing the forking paths problem in visual analytics. This design space encompasses existing approaches to address the forking paths problem (multiple comparison correction) as well as solutions that have not been applied to exploratory visual analytics (regularization). We also discuss how perceptual bias correction techniques may be used to correct biases induced in analysts’ understanding of their data due to the forking paths problem, and outline how this problem can be cast as a threat to validity within Munzner’s Nested Model of visualization design. Finally, we suggest paper review guidelines to encourage reviewers to consider the forking paths problem when evaluating future designs of visual analytics tools.
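Among the existing approaches in this design space, Holm's step-down Bonferroni procedure is a simple, widely used instance of classical multiple comparison correction. A minimal sketch (an illustration of the general technique, not a method proposed in this paper):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm's step-down procedure: compare the i-th smallest p-value to
    alpha / (m - i) and stop rejecting at the first failure. Controls the
    family-wise error rate while being uniformly more powerful than
    plain Bonferroni."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            rejected[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return rejected

print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))
# → [True, False, False, True]
```

In exploratory visual analytics the difficulty, as the paper notes, is that the number of implicit comparisons m along a forking path is rarely known, which is what motivates alternatives such as regularization.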

