Pooling morphometric estimates: a statistical equivalence approach

2015 ◽  
Author(s):  
Heath R Pardoe ◽  
Gary Cutter ◽  
Rachel A Alter ◽  
Rebecca Kucharsky Hiess ◽  
Mira Semmelroch ◽  
...  

Changes in hardware or image processing settings are a common issue for large multi-center studies. In order to pool MRI data acquired under these changed conditions, it is necessary to demonstrate that the changes do not affect MRI-based measurements. In these circumstances, classical inference testing is inappropriate because it is designed to detect differences, not prove similarity. We used a method known as statistical equivalence testing to address this limitation. Equivalence testing was carried out on three datasets: (i) cortical thickness and automated hippocampal volume estimates obtained from healthy individuals imaged using different multi-channel head coils; (ii) manual hippocampal volumetry obtained using two readers; and (iii) corpus callosum area estimates obtained using an automated method with manual cleanup carried out by two readers. Equivalence testing was carried out using the “two one-sided tests” (TOST) approach. Power analyses of the two one-sided tests were used to estimate sample sizes required for well-powered equivalence testing analyses. Mean and standard deviation estimates from the automated hippocampal volume dataset were used to carry out an example power analysis. Cortical thickness values were found to be equivalent over 61% of the cortex when different head coils were used (q < 0.05, FDR correction). Automated hippocampal volume estimates obtained using the same two coils were statistically equivalent (TOST p = 4.28 × 10⁻¹⁵). Manual hippocampal volume estimates obtained using two readers were not statistically equivalent (TOST p = 0.97). The use of different readers to carry out limited correction of automated corpus callosum segmentations yielded equivalent area estimates (TOST p = 1.28 × 10⁻¹⁴). Power analysis of simulated and automated hippocampal volume data demonstrated that the equivalence margin affects the number of subjects required for well-powered equivalence tests.
We have presented a statistical method for determining if morphometric measures obtained under variable conditions can be pooled. The equivalence testing technique is applicable for analyses in which experimental conditions vary over the course of the study.
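The “two one-sided tests” logic described above is straightforward to sketch in code. The following is a generic illustration, not the authors' implementation; the simulated coil volumes and the 150 mm³ equivalence margin are assumptions chosen for demonstration only.

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    H0: |mean(x - y)| >= margin. Rejecting both one-sided nulls
    supports equivalence within +/- margin.
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    df = n - 1
    t_lower = (d.mean() + margin) / se   # tests H0: mean(d) <= -margin
    t_upper = (d.mean() - margin) / se   # tests H0: mean(d) >= +margin
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)         # TOST p-value

# Example: simulated hippocampal volumes (mm^3) from two head coils,
# differing only by small non-systematic measurement noise.
rng = np.random.default_rng(0)
coil_a = rng.normal(4200, 300, 30)
coil_b = coil_a + rng.normal(0, 50, 30)
p = tost_paired(coil_a, coil_b, margin=150)   # margin is a hypothetical choice
```

Because both one-sided nulls must be rejected, the reported TOST p-value is the larger of the two one-sided p-values.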



2020 ◽  
Author(s):  
Anthony Schmidt

Intensive English programs (IEPs) provide a pathway into higher education for international students who need additional language support before full matriculation. Despite their long history in higher education, there is little research on the effectiveness of these programs. The current research examines the effectiveness of an IEP by comparing IEP students to directly admitted international students. Results from regression models of first-semester and first-year GPA indicated no significant differences between these two student groups. Follow-up equivalence testing indicated statistical equivalence in several cases. The findings lead to the conclusion that the IEP is effective in helping students perform on par with directly admitted international students. These findings lend further support to IEPs and alternative pathways to direct admission.


Medicina ◽  
2020 ◽  
Vol 56 (10) ◽  
pp. 497
Author(s):  
Nauris Zdanovskis ◽  
Ardis Platkājis ◽  
Andrejs Kostiks ◽  
Guntis Karelis

Background and Objectives: A complex network of axonal pathways interlinks the human brain cortex. Brain networks are not distributed evenly, and brain regions making more connections with other parts are defined as brain hubs. Our objective was to analyze brain hub region volume and cortical thickness and determine the association with cognitive assessment scores in patients with mild cognitive impairment (MCI) and dementia. Materials and Methods: In this cross-sectional study, we included 11 patients (5 mild cognitive impairment; 6 dementia). All patients underwent neurological examination, and Montreal Cognitive Assessment (MoCA) test scores were recorded. Patients were scanned on a 3T MRI scanner, and cortical thickness and volumetric data were derived using FreeSurfer 7.1.0. Results: Compared with the dementia group, MCI patients had higher hippocampal volumes (p < 0.05) and greater left entorhinal cortex thickness (p < 0.05). There was a significant positive correlation between MoCA test scores and left hippocampus volume (r = 0.767, p < 0.01), right hippocampus volume (r = 0.785, p < 0.01), right precuneus cortical thickness (r = 0.648, p < 0.05), left entorhinal cortex thickness (r = 0.767, p < 0.01), and right entorhinal cortex thickness (r = 0.612, p < 0.05). Conclusions: In our study, hippocampal volume and entorhinal cortex thickness differed significantly between the MCI and dementia groups. Additionally, we found a statistically significant positive correlation between MoCA scores, hippocampal volume, entorhinal cortex thickness, and right precuneus cortical thickness. Although other brain hub regions did not show statistically significant differences, additional research is needed to evaluate the association of brain hub regions with MCI and dementia.
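The correlations reported above are standard Pearson correlations between a cognitive score and a morphometric measure. As a minimal sketch (the MoCA and volume values below are hypothetical, not the study's data; only the sample size of 11 matches the study):

```python
import numpy as np
from scipy import stats

# Hypothetical MoCA scores and left hippocampal volumes (mm^3) for 11 patients;
# the study reports r = 0.767 (p < 0.01) for this pairing.
moca = np.array([14, 17, 19, 21, 22, 23, 24, 26, 27, 28, 29])
left_hipp = np.array([2900, 3100, 3000, 3300, 3400, 3600,
                      3500, 3700, 3900, 3800, 4000])

# Pearson correlation and two-sided p-value
r, p = stats.pearsonr(moca, left_hipp)
```

With only 11 subjects, a correlation must be fairly large to reach significance, which is consistent with the magnitudes the study reports.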


2020 ◽  
Vol 16 (S5) ◽  
Author(s):  
Gengsheng Chen ◽  
Nicole Sarah McKay ◽  
Brian A. Gordon ◽  
Jingxia Liu ◽  
Aylin Dincer ◽  
...  

2005 ◽  
Vol 77 (11) ◽  
pp. 221A-226A ◽  
Author(s):  
Giselle B. Limentani ◽  
Moira C. Ringo ◽  
Feng Ye ◽  
Mandy L. Bergquist ◽  
Ellen O. McSorley

2017 ◽  
Vol 43 (4) ◽  
pp. 407-439 ◽  
Author(s):  
Jodi M. Casabianca ◽  
Charles Lewis

The null hypothesis test used in differential item functioning (DIF) detection tests for a subgroup difference in item-level performance—if the null hypothesis of “no DIF” is rejected, the item is flagged for DIF. Conversely, an item is kept in the test form if there is insufficient evidence of DIF. We present frequentist and empirical Bayes approaches for implementing statistical equivalence testing for DIF using the Mantel–Haenszel (MH) DIF statistic. With these approaches, rejection of the null hypothesis of “DIF” allows the conclusion of statistical equivalence, a more stringent criterion for keeping items. In other words, the roles of the null and alternative hypotheses are interchanged in order to have positive evidence that the DIF of an item is small. A simulation study compares the equivalence testing approaches to the traditional MH DIF detection method with the Educational Testing Service classification system. We illustrate the methods with item response data from the 2012 Programme for International Student Assessment.
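The role reversal described above can be sketched as a z-based TOST on the Mantel–Haenszel log-odds-ratio. This is a simplified illustration, not the paper's frequentist or empirical Bayes procedures; the estimate, standard error, and margin below are assumed values (the margin 0.426 corresponds to the ETS "A" category boundary of |MH D-DIF| < 1 delta unit, since D-DIF = -2.35 × log OR).

```python
from scipy import stats

def tost_mh_dif(log_or, se, margin):
    """Equivalence test on a Mantel-Haenszel log-odds-ratio DIF estimate.

    The hypotheses are interchanged relative to standard DIF detection:
    H0 is |log OR| >= margin ("DIF present"), and rejecting both
    one-sided nulls supports the conclusion that DIF is negligibly small.
    """
    z_lower = (log_or + margin) / se   # tests H0: log OR <= -margin
    z_upper = (log_or - margin) / se   # tests H0: log OR >= +margin
    p_lower = stats.norm.sf(z_lower)
    p_upper = stats.norm.cdf(z_upper)
    return max(p_lower, p_upper)

# Example: a small estimated DIF effect with a tight standard error
# is declared equivalent (kept) under the assumed margin.
p = tost_mh_dif(log_or=0.05, se=0.10, margin=0.426)
```

A small TOST p-value here is positive evidence that the item's DIF lies inside the margin, a stricter criterion than merely failing to detect DIF.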


2020 ◽  
Author(s):  
Alberto Stefanelli ◽  
Martin Lukac

Conjoint analysis is an experimental technique that has become popular for understanding people's choices in multi-dimensional decision-making processes. Despite the importance of power analysis for experimental techniques, the current literature has largely disregarded statistical power considerations when designing conjoint experiments. The main goal of this article is to provide researchers and practitioners with a practical tool for calculating the statistical power of conjoint experiments. To this end, we first conducted an extensive literature review to understand how conjoint experiments are designed and to gauge the plausible effect sizes reported in the literature. Second, we formulate a data generating model that is sufficiently flexible to accommodate a wide range of conjoint designs and hypothesized effects. Third, we present the results of an extensive series of simulation experiments based on the previously formulated data generating process. Our results show that, even with relatively large sample sizes and numbers of trials, conjoint experiments are not suited to drawing inferences for designs with large numbers of experimental conditions and relatively small effect sizes. Specifically, Type S and Type M errors are especially pronounced for experimental designs with relatively small effective sample sizes (< 3000) or a high number of levels (> 15) that find small but statistically significant effects (< 0.03). The proposed online tool based on the simulation results can be used by researchers to perform power analyses of their designs and hence achieve adequate designs for future conjoint experiments.
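The simulation logic behind such a power analysis can be sketched for the simplest case, a single binary attribute in a forced-choice design. This is a hedged illustration of the general Monte Carlo approach, not the authors' data generating model; the effect size (0.03) and effective sample size (3000) echo the thresholds mentioned above, and the AMCE is estimated as a simple difference in means.

```python
import numpy as np
from scipy import stats

def simulate_power(true_amce=0.03, n_tasks=3000, n_sims=500, alpha=0.05, seed=0):
    """Monte Carlo power analysis for one binary conjoint attribute.

    Each task shows a profile whose attribute level is randomized; the
    level shifts the choice probability by `true_amce`. Returns power,
    the Type S (wrong sign) rate, and the Type M (exaggeration) ratio
    among statistically significant estimates.
    """
    rng = np.random.default_rng(seed)
    sig_effects = []
    for _ in range(n_sims):
        level = rng.integers(0, 2, n_tasks)
        p_choose = 0.5 + true_amce * level
        chosen = (rng.random(n_tasks) < p_choose).astype(float)
        a, b = chosen[level == 1], chosen[level == 0]
        t, p = stats.ttest_ind(a, b)
        if p < alpha:
            sig_effects.append(a.mean() - b.mean())
    sig = np.array(sig_effects)
    power = sig.size / n_sims
    type_s = (np.sign(sig) != np.sign(true_amce)).mean() if sig.size else np.nan
    type_m = (np.abs(sig) / abs(true_amce)).mean() if sig.size else np.nan
    return power, type_s, type_m
```

Under these assumed values the simulation illustrates the article's warning: power is well below conventional thresholds, and the estimates that do reach significance systematically exaggerate the true effect (Type M ratio above 1).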


2021 ◽  
Author(s):  
Heath R Pardoe ◽  
Samantha P Martin ◽  
Yijun Zhao ◽  
Allan George ◽  
Hui Yuan ◽  
...  

Introduction: In-scanner head motion is a common cause of reduced image quality in neuroimaging, and causes systematic brain-wide changes in cortical thickness and volumetric estimates derived from structural MRI scans. There are currently no widely available methods for measuring head motion during structural MRI. Here, we train a deep learning predictive model to estimate changes in head pose using video obtained from an in-scanner eye tracker during an EPI-BOLD acquisition in which participants undertook deliberate in-scanner head movements. The predictive model was then used to estimate head pose changes during structural MRI scans, and these estimates were correlated with cortical thickness and subcortical volume estimates. Methods: 21 healthy controls (age 32 ± 13 years, 11 female) were studied. Participants carried out a series of stereotyped, prompted in-scanner head motions during acquisition of an EPI-BOLD sequence with simultaneous recording of eye tracker video. Motion-affected and motion-free whole-brain T1-weighted MRI were also obtained. Image coregistration was used to estimate changes in head pose over the duration of the EPI-BOLD scan, and these estimates were used to train a predictive model to estimate head pose changes from the video data. Model performance was quantified using the coefficient of determination (R²). We evaluated the utility of our technique by assessing the relationship between video-based head pose changes during structural MRI and (i) vertex-wise cortical thickness and (ii) subcortical volume estimates. Results: Video-based head pose estimates were significantly correlated with ground-truth head pose changes estimated from EPI-BOLD imaging in a hold-out dataset. We observed a general brain-wide reduction in cortical thickness with increased head motion, with some isolated regions showing increased cortical thickness estimates with increased motion. Subcortical volumes were generally reduced in motion-affected scans.
Conclusions: We trained a predictive model to estimate changes in head pose during structural MRI scans using in-scanner eye tracker video. The method is independent of individual image acquisition parameters and does not require markers to be fixed to the patient, suggesting it may be well suited to clinical imaging and research environments. Head pose changes estimated using our approach can be used as covariates in morphometric image analyses to improve the neurobiological validity of structural imaging studies of brain development and disease.
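The hold-out evaluation metric used above, the coefficient of determination, can be computed directly from predicted and ground-truth pose values. A minimal sketch with hypothetical values (not the study's data):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination (R^2) between ground-truth head pose
    changes (e.g. from EPI-BOLD coregistration) and model predictions."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical hold-out example: rotation about one axis (degrees)
truth = np.array([0.0, 0.5, 1.2, 2.0, 1.1, 0.3])
pred = np.array([0.1, 0.4, 1.0, 1.8, 1.3, 0.2])
r2 = r_squared(truth, pred)
```

R² near 1 indicates that the video-based predictions track the coregistration-derived pose changes closely; R² at or below 0 indicates the model predicts no better than the mean.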

