Mindcontrol: A Web Application for Brain Segmentation Quality Control

2016 ◽  
Author(s):  
Anisha Keshavan ◽  
Esha Datta ◽  
Ian McDonough ◽  
Christopher R. Madan ◽  
Kesshi Jordan ◽  
...  

Abstract: Tissue classification plays a crucial role in the investigation of normal neural development, brain-behavior relationships, and the disease mechanisms of many psychiatric and neurological illnesses. Ensuring the accuracy of tissue classification is important for quality research and, in particular, for the translation of imaging biomarkers to clinical practice. Assessment with the human eye is vital to correct various errors inherent to all currently available segmentation algorithms. Manual quality assurance becomes methodologically difficult at large scale, a problem of increasing importance as the number of datasets continues to rise. To make this process more efficient, we have developed Mindcontrol, an open-source web application for the collaborative quality control of neuroimaging processing outputs. The Mindcontrol platform consists of a dashboard to organize data, descriptive visualizations to explore the data, an imaging viewer, and an in-browser annotation and editing toolbox for data curation and quality control. Mindcontrol is flexible and can be configured for the outputs of any software package in any data organization structure. Example configurations for three large, open-source datasets are presented: the 1000 Functional Connectomes Project (FCP), the Consortium for Reliability and Reproducibility (CoRR), and the Autism Brain Imaging Data Exchange (ABIDE) Collection. These demo applications link descriptive quality control metrics, regional brain volumes, and thickness scalars to a 3D imaging viewer and editing module, resulting in an easy-to-implement quality control protocol that can be scaled for any size and complexity of study.
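
As a rough illustration of what such a per-pipeline configuration has to capture (image locations, overlay segmentations, QC metrics, and reviewer states), here is a minimal Python sketch; the key names are hypothetical and do not reflect Mindcontrol's actual settings schema.

```python
# Hypothetical sketch of the per-module configuration a QC viewer like
# Mindcontrol needs; key names here are illustrative, not the actual schema.
freesurfer_module = {
    "name": "FreeSurfer v6.0",            # label shown on the dashboard
    "entry_type": "freesurfer",           # groups entries from one pipeline
    "image_template": "sub-{subject}/mri/T1.mgz",      # where each volume lives
    "overlay_template": "sub-{subject}/mri/aseg.mgz",  # segmentation to review
    "metrics": ["left_hippocampus_vol", "mean_thickness"],  # QC scatter plots
    "qc_states": ["pass", "fail", "needs_edits"],  # labels reviewers can assign
}

def image_path(module, subject):
    """Resolve the image location for one subject from the path template."""
    return module["image_template"].format(subject=subject)

print(image_path(freesurfer_module, "01"))  # -> sub-01/mri/T1.mgz
```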

2019 ◽  
Author(s):  
Horea-Ioan Ioanas ◽  
Markus Marks ◽  
Clément M. Garin ◽  
Marc Dhenain ◽  
Mehmet Fatih Yanik ◽  
...  

Abstract: Large-scale research integration is contingent on seamless access to data in standardized formats. Standards enable researchers to understand external experiment structures, pool results, and apply homogeneous preprocessing and analysis workflows; in particular, they provide these benefits without the need for numerous, potentially confounding compatibility add-ons. In small animal magnetic resonance imaging, an overwhelming proportion of data is acquired via the ParaVision software of the Bruker Corporation. The original data structure is predominantly transparent, but fundamentally incompatible with modern pipelines; additionally, it sources metadata from free-field operator input, which diverges strongly between laboratories and researchers. In this article we present an open-source workflow that automatically converts and reposits data from the ParaVision structure into the widely supported and openly documented Brain Imaging Data Structure (BIDS). Complementing this workflow, we also present operator guidelines for appropriate ParaVision data input and a programmatic walk-through detailing how preexisting scans with uninterpretable metadata records can easily be made compliant after acquisition.
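
For orientation, the BIDS target of such a conversion is a fixed directory and file-naming scheme. The helper below is a minimal sketch, not the authors' converter, that composes BIDS-compliant paths from subject, session, and modality entities.

```python
import os

def bids_path(subject, session, suffix, datatype, extension=".nii.gz", **entities):
    """Compose a BIDS-compliant relative path, e.g.
    sub-01/ses-1/func/sub-01_ses-1_task-rest_bold.nii.gz"""
    parts = [f"sub-{subject}", f"ses-{session}"]
    # Optional entities (task, acq, run, ...) sit between session and suffix.
    parts += [f"{key}-{value}" for key, value in entities.items()]
    filename = "_".join(parts) + f"_{suffix}{extension}"
    return os.path.join(f"sub-{subject}", f"ses-{session}", datatype, filename)

print(bids_path("01", "1", "T2w", "anat"))              # anatomical scan
print(bids_path("01", "1", "bold", "func", task="rest"))  # functional scan
```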


2020 ◽  
pp. 336-345 ◽  
Author(s):  
Erik Ziegler ◽  
Trinity Urban ◽  
Danny Brown ◽  
James Petts ◽  
Steve D. Pieper ◽  
...  

PURPOSE Zero-footprint Web architecture enables imaging applications to be deployed on premise or in the cloud without requiring installation of custom software on the user’s computer. Benefits include decreased costs and information technology support requirements, as well as improved accessibility across sites. The Open Health Imaging Foundation (OHIF) Viewer is an extensible platform developed to leverage these benefits and address the demand for open-source Web-based imaging applications. The platform can be modified to support site-specific workflows and accommodate evolving research requirements. MATERIALS AND METHODS The OHIF Viewer provides basic image review functionality (eg, image manipulation and measurement) as well as advanced visualization (eg, multiplanar reformatting). It is written as a client-only, single-page Web application that can easily be embedded into third-party applications or hosted as a standalone Web site. The platform provides extension points for software developers to include custom tools and adapt the system for their workflows. It is standards-compliant and relies on DICOMweb for data exchange and OpenID Connect for authentication, but it can be configured to use any data source or authentication flow. Additionally, the user interface components are provided in a standalone component library so that developers can create custom extensions. RESULTS The OHIF Viewer and its underlying components have been widely adopted and integrated into multiple clinical research platforms (eg, Precision Imaging Metrics, XNAT, LabCAS, ISB-CGC) and commercial applications (eg, OsiriX). It has also been used to build custom imaging applications (eg, ProstateCancer.ai, Crowds Cure Cancer [presented as a case study]). CONCLUSION The OHIF Viewer provides a flexible framework for building applications to support imaging research. Its adoption could reduce redundancies in software development for National Cancer Institute–funded projects, including Informatics Technology for Cancer Research and the Quantitative Imaging Network.
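
Because the viewer relies on DICOMweb for data exchange, the studies it displays can be located with a standard QIDO-RS search, which is plain HTTP returning DICOM JSON. The sketch below assumes a hypothetical archive URL; any QIDO-RS-compliant server would respond the same way.

```python
import requests

# Hypothetical DICOMweb server root; any QIDO-RS-compliant archive works.
QIDO_ROOT = "https://pacs.example.org/dicomweb"

def find_studies(patient_id):
    """QIDO-RS study search: a plain HTTP GET returning DICOM JSON."""
    response = requests.get(
        f"{QIDO_ROOT}/studies",
        params={"PatientID": patient_id, "limit": "10"},
        headers={"Accept": "application/dicom+json"},
    )
    response.raise_for_status()
    return response.json()

for study in find_studies("12345"):
    # DICOM JSON keys attributes by hex tag; 0020000D is StudyInstanceUID.
    print(study["0020000D"]["Value"][0])
```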


2013 ◽  
Vol 19 (6) ◽  
pp. 659-667 ◽  
Author(s):  
A Di Martino ◽  
C-G Yan ◽  
Q Li ◽  
E Denio ◽  
F X Castellanos ◽  
...  

2006 ◽  
Vol 1 ◽  
pp. 44-55 ◽
Author(s):  
Jan Pytel

The C++ language was used for creating web applications at the Department of Mapping and Cartography for many years. Many projects grew large and became complicated to maintain. Consequently, the traditional way of adding functionality to a web server that had been used previously (CGI programs) ceased to be useful. I was looking for alternative solutions, particularly open-source ones. After trying many languages and approaches, I finally chose the Java language and started writing servlets. Using Java servlets has significantly simplified the development of web applications and, as a result, shortened the development cycle. Thanks to the Java Native Interface (JNI), it is still possible to use the C++ libraries we rely on. The main goal of this article is to share my practical experience with rewriting a typical CGI web application and creating a complex geoinformatic web application.


2019 ◽  
Author(s):  
Erik C. Johnson ◽  
Miller Wilt ◽  
Luis M. Rodriguez ◽  
Raphael Norman-Tenazas ◽  
Corban Rivera ◽  
...  

Abstract: Emerging neuroimaging datasets (collected through modalities such as electron microscopy, calcium imaging, or X-ray microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational expertise or resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods. We developed an ecosystem of neuroimaging data analysis pipelines that utilize open-source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars, we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework that connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines. Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets but may be applied to similar problems in other domains.
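
One way to picture the standardized modules described here is as versioned processing stages composed into an ordered pipeline with logged provenance. The sketch below is a hedged illustration of that pattern, not the authors' actual framework API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Module:
    """A pipeline stage whose name and version are recorded for reproducibility."""
    name: str
    version: str
    run: Callable[[Any], Any]

def run_pipeline(modules, data):
    """Execute modules in order, logging provenance at each step."""
    for module in modules:
        print(f"running {module.name} {module.version}")
        data = module.run(data)
    return data

# Illustrative stages for an electron-microscopy connectomics pipeline;
# the lambdas are placeholders for real image-processing transforms.
pipeline = [
    Module("denoise", "1.0", lambda vol: vol),
    Module("detect_synapses", "2.1", lambda vol: vol),
]
result = run_pipeline(pipeline, data="raw_em_volume")
```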


Author(s):  
Nikhil Bhagwat ◽  
Amadou Barry ◽  
Erin W. Dickie ◽  
Shawn T. Brown ◽  
Gabriel A. Devenyi ◽  
...  

The choice of preprocessing pipeline introduces variability into neuroimaging analyses that affects the reproducibility of scientific findings. Features derived from structural and functional MR imaging data are sensitive to the algorithmic and parametric differences of preprocessing tasks such as image normalization, registration, and segmentation. Therefore, it is critical to understand, and potentially mitigate, the cumulative biases of pipelines in order to distinguish biological effects from methodological variance. Here we use an open structural MR imaging dataset (ABIDE), supplemented with the Human Connectome Project (HCP), to highlight the impact of pipeline selection on cortical thickness measures. Specifically, we investigate the effect of 1) software tool (e.g., ANTs, CIVET, FreeSurfer), 2) cortical parcellation (DKT, Destrieux, Glasser), and 3) quality control procedure (manual, automatic). We divide our statistical analyses by 1) method type, i.e., task-free (unsupervised) versus task-driven (supervised), and 2) inference objective, i.e., neurobiological group differences versus individual prediction. Results show that software, parcellation, and quality control significantly impact task-driven neurobiological inference. Additionally, software selection strongly impacts neurobiological and individual task-free analyses, and quality control alters performance on the individual-centric prediction tasks. This comparative performance evaluation partially explains the source of inconsistencies in neuroimaging findings. Furthermore, it underscores the need for more rigorous scientific workflows and accessible informatics resources to replicate and compare preprocessing pipelines, in order to address the compounding problem of reproducibility in the age of large-scale, data-driven computational neuroscience.
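
The core of such a comparison is running the same statistical test across every combination of software, parcellation, and quality control procedure and tabulating how the inference shifts. Below is a minimal sketch with synthetic thickness values; a real analysis would load each pipeline's outputs for the ABIDE subjects instead.

```python
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
softwares = ["ANTs", "CIVET", "FreeSurfer"]
parcellations = ["DKT", "Destrieux", "Glasser"]
qc_modes = ["manual", "automatic"]

for software, parcellation, qc in itertools.product(softwares, parcellations, qc_modes):
    # Synthetic mean cortical thickness (mm) for two groups; stand-ins for
    # the per-subject values each pipeline configuration would produce.
    controls = rng.normal(2.50, 0.2, size=50)
    patients = rng.normal(2.45, 0.2, size=50)
    t, p = stats.ttest_ind(controls, patients)
    print(f"{software:10s} {parcellation:10s} {qc:9s} t={t:+.2f} p={p:.3f}")
```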


GigaScience ◽  
2020 ◽  
Vol 9 (12) ◽  
Author(s):  
Erik C Johnson ◽  
Miller Wilt ◽  
Luis M Rodriguez ◽  
Raphael Norman-Tenazas ◽  
Corban Rivera ◽  
...  

Abstract: Background: Emerging neuroimaging datasets (collected with imaging techniques such as electron microscopy, optical microscopy, or X-ray microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods. Results: We developed an ecosystem of neuroimaging data analysis pipelines that use open-source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework, which connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines. Conclusions: Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets but may be applied to similar problems in other domains.


2020 ◽  
Author(s):  
Amir Reza Sadri ◽  
Andrew Janowczyk ◽  
Ren Zhou ◽  
Ruchika Verma ◽  
Niha Beig ◽  
...  

2021 ◽  
Vol 10 (14) ◽  
pp. 3020 ◽
Author(s):  
Ylenia Bartolacelli ◽  
Andrea Barbieri ◽  
Francesco Antonini-Canterin ◽  
Mauro Pepi ◽  
Ines Monte ◽  
...  

The Stress Echo (SE) 2030 study is an international, prospective, multicenter cohort study that will include >10,000 patients from ≥20 centers in ≥10 countries. It represents the logical and chronological continuation of the SE 2020 study, which developed, validated, and disseminated the “ABCDE protocol” of SE, more suitable than conventional SE for describing the complex vulnerabilities of the contemporary patient within and beyond coronary artery disease. SE2030 started with a recruitment plan running from 2021 to 2025 (with follow-up to 2030) and 12 subprojects (ranging from coronary artery disease to valvular disease and post-COVID-19 patients). With these features, the study poses particular challenges for quality control assurance, methodological harmonization, and data management. One of the significant upgrades of SE2030 over SE2020 was the development and implementation of a Research Electronic Data Capture (REDCap)-based infrastructure for interactive, entirely web-based data management, designed to integrate and optimize reproducible clinical research data. The purposes of our paper are: first, to describe the methodology used for quality control of imaging data, and second, to present the informatic infrastructure developed on the REDCap platform for data entry, storage, and management in a large-scale multicenter study.
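
REDCap exposes its data-entry backend through a documented HTTP API, so records collected at each center can be pushed programmatically. The sketch below shows a standard record import; the URL, token, and field names are placeholders, since each REDCap project defines its own.

```python
import json
import requests

# Placeholder endpoint and token; each REDCap project issues its own.
REDCAP_URL = "https://redcap.example.org/api/"
API_TOKEN = "REPLACE_ME"

def import_records(records):
    """Push records to a REDCap project via its standard import API."""
    response = requests.post(REDCAP_URL, data={
        "token": API_TOKEN,
        "content": "record",
        "action": "import",
        "format": "json",
        "type": "flat",          # one row per record
        "data": json.dumps(records),
    })
    response.raise_for_status()
    return response.json()       # REDCap reports the count of imported records

# Hypothetical field names for illustration only.
import_records([{"record_id": "001", "lvef_rest": "55"}])
```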


2020 ◽  
Author(s):  
Guillaume Lobet ◽  
Charlotte Descamps ◽  
Lola Leveau ◽  
Alain Guillet ◽  
Jean-François Rees

Abstract: Learning biology, and in particular systematics, requires learning a substantial amount of specific vocabulary for both botanical and zoological studies. While crucial, the precise identification of structures serving as evolutionary traits and systematic criteria is not in itself a highly motivating task for students. Teaching it in a traditional setting is challenging, especially when a large group of students must be kept engaged. This is even more difficult if, as during the COVID-19 crisis, students are not allowed to access laboratories for hands-on observation of fresh specimens and are sometimes restricted to short-range movements outside their homes.

Here we present QuoVidi, a new open-source web platform for the organisation of large-scale treasure hunts. The platform works as follows: students, organised in teams, receive a list of quests containing morphologic, ecologic, or systematic terms. They must first understand the meaning of each quest, then find it in the environment. Once they find the organism corresponding to a quest, they upload a geotagged picture of their finding and submit it on the platform. The correctness of each submission is evaluated by the staff. During the COVID-19 lockdown, previously validated pictures were also submitted for evaluation to students who were confined in low-biodiversity areas. From a research perspective, the system enables students to build large image databases, similar to citizen-science projects.

Besides enhancing students' motivation to learn the vocabulary and perform observations on self-found specimens, this system allows faculty to remotely follow and assess the work of large numbers of students. The interface is freely available, open-source, and customizable. It can be used in other disciplines with adapted quests, and we expect it to be of interest in many classroom settings.
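
A submission pipeline like this needs to confirm that an uploaded picture is actually geotagged. The sketch below, an illustration rather than QuoVidi's actual code, reads GPS coordinates from a JPEG's EXIF block with Pillow.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def _to_degrees(dms, ref):
    """Convert EXIF degrees/minutes/seconds rationals to signed decimal degrees."""
    degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -degrees if ref in ("S", "W") else degrees

def gps_coordinates(path):
    """Return (latitude, longitude) from a JPEG's EXIF, or None if absent."""
    exif = Image.open(path)._getexif() or {}   # classic Pillow EXIF helper
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None)
    if gps_raw is None:
        return None                            # picture is not geotagged
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    return (_to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            _to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(gps_coordinates("submission.jpg"))  # e.g. (50.6686, 4.6118)
```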

