Technical Note: MRQy — An open-source tool for quality control of MR imaging data

2020 ◽  
Author(s):  
Amir Reza Sadri ◽  
Andrew Janowczyk ◽  
Ren Zhou ◽  
Ruchika Verma ◽  
Niha Beig ◽  
...

Author(s):
Pradeep Reddy Raamana ◽  
Athena Theyers ◽  
Tharushan Selliah ◽  
Piali Bhati ◽  
Stephen R. Arnott ◽  
...  

Abstract
Quality control of morphometric neuroimaging data is essential to improve reproducibility. Owing to the complexity of neuroimaging data and, consequently, of the interpretation of their results, visual inspection by trained raters is the most reliable way to perform quality control. Here, we present a protocol for the visual quality control of the anatomical accuracy of FreeSurfer parcellations, based on an easy-to-use open-source tool called VisualQC. We comprehensively evaluate its utility in terms of error detection rate and inter-rater reliability on two large multi-site datasets, and discuss site differences in error patterns. This evaluation shows that VisualQC is a practically viable protocol for community adoption.
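The evaluation described above rests on two quantities, error detection rate and inter-rater reliability. As a minimal sketch of how such agreement might be computed, the snippet below scores two raters' QC labels with Cohen's kappa; the CSV layout, column names, and the "error" label are illustrative assumptions, not part of the VisualQC protocol.

```python
# Hypothetical sketch: quantifying inter-rater agreement on QC ratings.
# The CSV layout and rating labels are assumptions, not VisualQC's format.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Each row: subject ID, one rating per rater, and a consensus/reference label.
# columns: subject, rater_a, rater_b, consensus
ratings = pd.read_csv("qc_ratings.csv")

kappa = cohen_kappa_score(ratings["rater_a"], ratings["rater_b"])
print(f"Cohen's kappa across {len(ratings)} subjects: {kappa:.2f}")

# Error detection rate for one rater, measured against the consensus column.
detected = (ratings["rater_a"] == "error") & (ratings["consensus"] == "error")
rate = detected.sum() / (ratings["consensus"] == "error").sum()
print(f"Rater A error detection rate: {rate:.1%}")
```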


2018 ◽  
Vol 14 (3) ◽  
pp. e1006054 ◽  
Author(s):  
Juan Prada ◽  
Manju Sasi ◽  
Corinna Martin ◽  
Sibylle Jablonka ◽  
Thomas Dandekar ◽  
...  

2016 ◽  
Vol 43 (9) ◽  
pp. 5155-5160 ◽  
Author(s):  
Paolo Zaffino ◽  
Patrik Raudaschl ◽  
Karl Fritscher ◽  
Gregory C. Sharp ◽  
Maria Francesca Spadea

2016 ◽  
Author(s):  
Anisha Keshavan ◽  
Esha Datta ◽  
Ian McDonough ◽  
Christopher R. Madan ◽  
Kesshi Jordan ◽  
...  

Abstract
Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, for the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital to correct the various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at a large scale, a problem of increasing importance as the number of datasets continues to rise. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled to any size and complexity of study.
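Because Mindcontrol can be configured for any package's outputs, a study typically supplies a manifest linking each subject to its images and metrics. The sketch below generates such a JSON manifest in Python; the directory layout and field names are illustrative assumptions, not Mindcontrol's actual configuration schema.

```python
# Hypothetical sketch: building a JSON manifest of subjects, images, and
# QC metrics for a web-based QC dashboard. Field names are illustrative;
# consult the Mindcontrol documentation for the real schema.
import json
from pathlib import Path

dataset_root = Path("/data/abide")  # assumed FreeSurfer-style layout
entries = []
for subj_dir in sorted(dataset_root.glob("sub-*")):
    t1 = subj_dir / "mri" / "T1.mgz"
    aseg_stats = subj_dir / "stats" / "aseg.stats"
    if not t1.exists():
        continue
    entries.append({
        "subject_id": subj_dir.name,
        "image": str(t1),
        "stats_file": str(aseg_stats),
        "qc_status": "unreviewed",  # updated later by raters
    })

Path("manifest.json").write_text(json.dumps(entries, indent=2))
print(f"Wrote manifest with {len(entries)} subjects")
```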


Author(s):  
Nikhil Bhagwat ◽  
Amadou Barry ◽  
Erin W. Dickie ◽  
Shawn T. Brown ◽  
Gabriel A. Devenyi ◽  
...  

The choice of preprocessing pipeline introduces variability in neuroimaging analyses that affects the reproducibility of scientific findings. Features derived from structural and functional MR imaging data are sensitive to algorithmic and parametric differences between preprocessing tasks such as image normalization, registration, and segmentation. It is therefore critical to understand, and potentially mitigate, the cumulative biases of pipelines in order to distinguish biological effects from methodological variance. Here we use an open structural MR imaging dataset (ABIDE), supplemented with the Human Connectome Project (HCP), to highlight the impact of pipeline selection on cortical thickness measures. Specifically, we investigate the effect of 1) software tool (e.g., ANTs, CIVET, FreeSurfer), 2) cortical parcellation (DKT, Destrieux, Glasser), and 3) quality control procedure (manual, automatic). We divide our statistical analyses by 1) method type, i.e., task-free (unsupervised) versus task-driven (supervised), and 2) inference objective, i.e., neurobiological group differences versus individual prediction. Results show that software, parcellation, and quality control significantly impact task-driven neurobiological inference. Additionally, software selection strongly impacts both neurobiological and individual task-free analyses, and quality control alters performance on the individual-centric prediction tasks. This comparative performance evaluation partially explains the source of inconsistencies in neuroimaging findings. Furthermore, it underscores the need for more rigorous scientific workflows and accessible informatics resources to replicate and compare preprocessing pipelines, addressing the compounding problem of reproducibility in the age of large-scale, data-driven computational neuroscience.
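As one concrete illustration of the task-driven (supervised) comparison described above, the sketch below runs the same group-difference test on mean cortical thickness produced by each pipeline; the input file and column names are assumptions for illustration, not the study's actual analysis code.

```python
# Hypothetical sketch: does the choice of pipeline change a group-difference
# result on mean cortical thickness? Data file and column names are assumed.
import pandas as pd
from scipy import stats

df = pd.read_csv("thickness_by_pipeline.csv")
# columns: subject, group ("ASD"/"control"),
#          pipeline ("ANTs"/"CIVET"/"FreeSurfer"), mean_thickness

for pipeline, sub in df.groupby("pipeline"):
    asd = sub.loc[sub["group"] == "ASD", "mean_thickness"]
    ctl = sub.loc[sub["group"] == "control", "mean_thickness"]
    t, p = stats.ttest_ind(asd, ctl, equal_var=False)  # Welch's t-test
    print(f"{pipeline}: t = {t:.2f}, p = {p:.4f}")

# Diverging p-values across pipelines would signal methodological variance
# masquerading as (or masking) a biological effect.
```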


Author(s):  
Andrea Giovannucci ◽  
Johannes Friedrich ◽  
Pat Gunn ◽  
Jérémie Kalfon ◽  
Brandon L Brown ◽  
...  

2020 ◽  
pp. 100001
Author(s):  
Wilko Heitkoetter ◽  
Bruno U. Schyska ◽  
Danielle Schmidt ◽  
Wided Medjroubi ◽  
Thomas Vogt ◽  
...  

Author(s):  
Erin Polka ◽  
Ellen Childs ◽  
Alexa Friedman ◽  
Kathryn S. Tomsho ◽  
Birgit Claus Henn ◽  
...  

Sharing individualized results with health study participants, a practice we and others refer to as “report-back,” ensures participant access to exposure and health information and may promote health equity. However, the practice of report-back and the content shared is often limited by the time-intensive process of personalizing reports. Software tools that automate creation of individualized reports have been built for specific studies, but are largely not open-source or broadly modifiable. We created an open-source and generalizable tool, called the Macro for the Compilation of Report-backs (MCR), to automate compilation of health study reports. We piloted MCR in two environmental exposure studies in Massachusetts, USA, and interviewed research team members (n = 7) about the impact of MCR on the report-back process. Researchers using MCR created more detailed reports than during manual report-back, including more individualized numerical, text, and graphical results. Using MCR, researchers saved time producing draft and final reports. Researchers also reported feeling more creative in the design process and more confident in report-back quality control. While MCR does not expedite the entire report-back process, we hope that this open-source tool reduces the barriers to personalizing health study reports, promotes more equitable access to individualized data, and advances self-determination among participants.
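MCR itself is implemented as a macro, but the core idea, compiling one individualized report per participant from a results table, can be sketched in a few lines of Python templating; the data columns, file names, and letter wording below are hypothetical.

```python
# Hypothetical sketch: compiling one individualized report-back letter per
# participant from a results table. Columns and wording are illustrative.
from pathlib import Path
from string import Template
import pandas as pd

results = pd.read_csv("exposure_results.csv")
# columns: participant_id, analyte, value_ug_L, cohort_median_ug_L

template = Template(
    "Dear participant $pid,\n\n"
    "Your $analyte level was $value ug/L; the study median was $median ug/L.\n"
)

outdir = Path("reports")
outdir.mkdir(exist_ok=True)
for _, row in results.iterrows():
    letter = template.substitute(
        pid=row["participant_id"], analyte=row["analyte"],
        value=row["value_ug_L"], median=row["cohort_median_ug_L"],
    )
    (outdir / f"{row['participant_id']}.txt").write_text(letter)
```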

