Sharing brain mapping statistical results with the neuroimaging data model

2016 ◽  
Vol 3 (1) ◽  
Author(s):  
Camille Maumet ◽  
Tibor Auer ◽  
Alexander Bowring ◽  
Gang Chen ◽  
Samir Das ◽  
...  

Abstract
Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Results graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html.
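To make the idea of a "machine-readable description of statistical results" concrete, here is a minimal, illustrative stand-in for a NIDM-Results provenance graph in plain Python. The real format is RDF built on PROV and is typically handled with RDF tooling; the entity names and values below are hypothetical, chosen only to echo NIDM-Results terminology.

```python
# Illustrative stand-in for a NIDM-Results graph (the real format is RDF/PROV).
# All identifiers and values below are made up for this sketch.
nidm_graph = {
    "entities": [
        {"id": "niiri:excursion_set", "type": "nidm:ExcursionSetMap",
         "file": "ExcursionSet.nii.gz"},
        {"id": "niiri:peak_1", "type": "nidm:Peak",
         "coordinate": [-42.0, 18.0, 6.0], "equiv_z": 4.51},
        {"id": "niiri:peak_2", "type": "nidm:Peak",
         "coordinate": [30.0, -66.0, 42.0], "equiv_z": 3.97},
    ],
}

def peaks(graph):
    """Return (coordinate, z-value) pairs for every peak entity in the graph."""
    return [(e["coordinate"], e["equiv_z"])
            for e in graph["entities"] if e["type"] == "nidm:Peak"]

for coord, z in peaks(nidm_graph):
    print(coord, z)
```

Because the representation is structured rather than free text, a meta-analysis tool can extract peak coordinates and statistics like this programmatically, regardless of which software package produced them.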


2013 ◽  
Vol 25 (6) ◽  
pp. 834-842 ◽  
Author(s):  
Joseph M. Moran ◽  
Jamil Zaki

Functional imaging has become a primary tool in the study of human psychology but is not without its detractors. Although cognitive neuroscientists have made great strides in understanding the neural instantiation of countless cognitive processes, commentators have sometimes argued that functional imaging provides little or no utility for psychologists. And indeed, myriad studies over the last quarter century have employed the technique of brain mapping—identifying the neural correlates of various psychological phenomena—in ways that bear minimally on psychological theory. How can brain mapping be made more relevant to behavioral scientists broadly? Here, we describe three trends that increase precisely this relevance: (i) the use of neuroimaging data to adjudicate between competing psychological theories through forward inference, (ii) isolating neural markers of information processing steps to better understand complex tasks and psychological phenomena through probabilistic reverse inference, and (iii) using brain activity to predict subsequent behavior. Critically, these new approaches build on the extensive tradition of brain mapping, suggesting that efforts in this area—although not initially maximally relevant to psychology—can indeed be used in ways that constrain and advance psychological theory.


2021 ◽  
Author(s):  
Ashmita Kumar

The Neuroimaging Data Model (NIDM) was started by an international team of cognitive scientists, computer scientists, and statisticians to develop a data format capable of describing all aspects of the data lifecycle, from raw data through analyses and provenance. NIDM was built on top of the PROV standard and consists of three main interconnected specifications: Experiment, Results, and Workflow. These specifications were envisioned to capture information on all aspects of the neuroimaging data lifecycle, using semantic web techniques. They provide a critical capability to aid in the reproducibility and replication of studies, as well as data discovery in shared resources. The NIDM-Experiment component has been used to describe publicly available human neuroimaging datasets (e.g. ABIDE, ADHD200, CoRR, and OpenNeuro datasets), providing unambiguous descriptions of the clinical, neuropsychological, and imaging data collected as part of those studies and resulting in approximately 4.5 million statements about aspects of these datasets.

PyNIDM, a toolbox written in Python, supports the creation, manipulation, and querying of NIDM documents. It is an open-source project hosted on GitHub and distributed under the Apache License, Version 2.0. PyNIDM is under active development and testing. Tools have been created to support RESTful SPARQL queries of NIDM documents, helping users identify interesting cohorts across datasets when evaluating scientific hypotheses and/or replicating results found in the literature. This query functionality, together with the NIDM document semantics, provides a path for investigators to interrogate datasets, understand what data were collected in those studies, and obtain sufficiently annotated data dictionaries of the collected variables to facilitate transforming and combining data across studies.

Beyond querying across NIDM documents, high-level statistical analysis tools are needed to give investigators more insight into data they may want to combine for a complete scientific investigation. Here we report on one such tool providing linear modeling support for NIDM documents: nidm_linreg.
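As a sketch of the kind of linear modeling a tool like nidm_linreg performs, the following stdlib-only Python computes a simple ordinary-least-squares fit on two variables of the sort one might query out of NIDM documents. This is not nidm_linreg's actual code, and the variable names and data are invented for illustration.

```python
# A hedged sketch of simple ordinary least squares (y ~ a + b*x),
# the basic operation behind a linear-modeling tool such as nidm_linreg.

def ols(x, y):
    """Fit y = a + b*x via the closed-form least-squares solution."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # slope = covariance(x, y) / variance(x)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# e.g. age vs. a hypothetical score pooled across NIDM-described datasets
age = [10, 12, 14, 16, 18]
score = [20, 24, 28, 32, 36]
intercept, slope = ols(age, score)
print(intercept, slope)  # 0.0 2.0
```

In practice the predictors and outcome would come from a cross-dataset query rather than literals, with the annotated data dictionaries ensuring the variables being pooled actually measure the same thing.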



2020 ◽  
Author(s):  
Tim Schäfer ◽  
Christine Ecker

Abstract
Summary: We introduce fsbrain, an R package for the visualization of neuroimaging data. The package can be used to visualize vertex-wise and region-wise morphometry data, parcellations, labels and statistical results on brain surfaces in three dimensions (3D). Voxel data can be displayed in lightbox mode. The fsbrain package offers various customization options and produces publication-quality plots which can be displayed interactively, saved as bitmap images, or integrated into R notebooks.
Availability and Implementation: The software, source code and documentation are available under the MIT license at https://github.com/dfsp-spirit/fsbrain. Releases can be installed directly from the Comprehensive R Archive Network (CRAN).


GigaScience ◽  
2016 ◽  
Vol 5 (suppl_1) ◽  
Author(s):  
Vanessa Sochat ◽  
B. Nolan Nichols

2017 ◽  
Author(s):  
Shayan Shahand ◽  
Sílvia Olabarriaga

The lessons learned during six years of experience in the design, development, and operation of four Science Gateway (SG) generations motivated us to develop yet another generation of platforms, coined "Rosemary". At the core of Rosemary, the three fundamental SG functions, namely data, computing, and collaboration management, are integrated. Our earlier studies showed that complete integration between these functions is usually overlooked in existing SG platforms. Rosemary provides a generic data model, a RESTful API, and a responsive UI that can be customized through programming to build customized SGs. Moreover, Rosemary is designed and implemented to be flexible to changes in e-Infrastructures and user community requirements. The software frameworks, tools, and libraries employed in the realization of Rosemary streamline the development, deployment, and operation of customized SGs for users' needs. The code of Rosemary is open source, available at https://github.com/AMCeScience/Rosemary-Vanilla. So far the platform has been used to implement prototypes of three SGs: for high-throughput analysis and management of neuroimaging data, for sharing of data in in-vitro fertilization research, and for provenance tracking of DNA sequencing data. This paper presents the design considerations, data model, and system architecture of Rosemary and highlights some of the features that are intrinsic to its design and implementation, with examples from the three prototypes.
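To illustrate what a "generic data model" buys such a platform, here is a minimal sketch, not Rosemary's actual code: every domain object is a typed resource with free-form properties and links, so one storage layer, API, and UI can serve very different gateways. The class and field names are our own assumptions.

```python
# Illustrative generic resource model: one schema for many Science Gateways.
from dataclasses import dataclass, field

@dataclass
class Resource:
    kind: str                                   # e.g. "scan", "job", "project"
    properties: dict = field(default_factory=dict)   # domain-specific attributes
    links: list = field(default_factory=list)        # ids of related resources

# The same model describes a neuroimaging scan and the job that processes it:
scan = Resource("scan", {"modality": "T1w", "subject": "sub-01"})
job = Resource("job", {"pipeline": "freesurfer"}, links=["scan/1"])
print(job.kind, job.links)
```

A sequencing or IVF gateway would reuse the identical `Resource` shape with different `kind` and `properties` values, which is what lets the data, computing, and collaboration functions stay integrated across domains.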



2016 ◽  
Vol 10 ◽  
Author(s):  
David Keator ◽  
Karl Helmer ◽  
Satrajit Ghosh ◽  
Tibor Auer ◽  
Camille Maumet ◽  
...  

2013 ◽  
Vol 385-386 ◽  
pp. 1764-1770
Author(s):  
Yu Wei Gao ◽  
Xia Hou ◽  
Ning Li

For the purpose of measuring document interoperability, a Feature Data Model (FDM) of open office document formats is proposed. Feature Data is defined as a document container that holds a number of document features; each data object in the container conforms to a different document standard or specification. Using FDM, instance documents establish mapping relationships between the features of different formats. Feature Data is then used to measure interoperability and to compute the statistical results automatically. The Document Interoperability Measuring System (DIMS) described in this paper is implemented in Java to demonstrate the feasibility of this model and architecture.
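The core idea, treating a document as a container of features and scoring two formats by how many features survive the mapping, can be sketched in a few lines. The Jaccard-style score and the feature names below are our own simplification for illustration, not the paper's exact metric or feature catalogue.

```python
# Hedged sketch: interoperability as the fraction of shared document features.

def interoperability(features_a, features_b):
    """Jaccard-style score: |shared features| / |all features| in [0, 1]."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

# Hypothetical feature sets for two office document formats
odf = {"bold", "italic", "table", "footnote", "change-tracking"}
ooxml = {"bold", "italic", "table", "footnote", "smart-art"}
score = interoperability(odf, ooxml)
print(round(score, 2))  # 0.67
```

A system like DIMS would populate such feature sets automatically from instance documents and their cross-format mappings, then aggregate the scores into its statistical results.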

