IOCBIO Kinetics: An open-source software solution for analysis of data traces

2020 ◽  
Vol 16 (12) ◽  
pp. e1008475
Author(s):  
Marko Vendelin ◽  
Martin Laasmaa ◽  
Mari Kalda ◽  
Jelena Branovets ◽  
Niina Karro ◽  
...  

Biological measurements frequently involve measuring parameters as a function of time, space, or frequency. Later, during the analysis phase of the study, the researcher splits the recorded data trace into smaller sections, analyzes each section separately by finding a mean or fitting against a specified function, and uses the analysis results in the study. Here, we present software that allows these data traces to be analyzed in a manner that ensures repeatability of the analysis and simplifies the application of the FAIR (findability, accessibility, interoperability, and reusability) principles in such studies. At the same time, it simplifies the routine data analysis pipeline and gives fast access to an overview of the analysis results. For that, the software supports reading the raw data, processing the data as specified in the protocol, and storing all intermediate results in the laboratory database. The software can be extended by study- or hardware-specific modules to provide the required data import and analysis facilities. To simplify the development of the data entry web interfaces that can be used to enter data describing the experiments, we released a web framework with an example implementation of such a site. The software is covered by an open-source license and is available through several online channels.
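The core analysis step described in the abstract, splitting a trace into sections and summarizing each one by a mean or a fit, can be sketched as follows. This is a minimal illustration in pure Python; the function names and the choice of a linear fit are assumptions for the example, not the package's actual API.

```python
from statistics import fmean

def fit_line(xs, ys):
    """Least-squares slope and intercept for one section."""
    mx, my = fmean(xs), fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def analyze_sections(times, values, boundaries):
    """Split a recorded trace at the given time boundaries and
    summarize each section by its mean and a fitted slope."""
    results = []
    for start, end in zip(boundaries, boundaries[1:]):
        section = [(t, v) for t, v in zip(times, values) if start <= t < end]
        xs, ys = zip(*section)
        slope, _ = fit_line(xs, ys)
        results.append({"start": start, "end": end,
                        "mean": fmean(ys), "slope": slope})
    return results

# A synthetic trace with rising, flat, and falling phases
times = [i * 0.01 for i in range(300)]
values = [2 * t if t < 1 else (2.0 if t < 2 else 2.0 - (t - 2)) for t in times]
sections = analyze_sections(times, values, [0.0, 1.0, 2.0, 3.0])
```

In a real pipeline each section would instead be fitted against the study-specific function named in the protocol, and the per-section results stored alongside the raw trace.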

2020 ◽  
Author(s):  
Lingyu Xu ◽  
Stanislau Hrybouski ◽  
Yuancheng Xu ◽  
Richard Coulden ◽  
Emer Sonnex ◽  
...  

Objectives: This study aimed to investigate a novel semi-automated three-dimensional (3D) quantification of the pericoronary epicardial adipose tissue radiodensity (PCATrd).
Methods: Twenty-four subjects who previously underwent contrast-enhanced cardiac CT scans were retrospectively identified. The PCATrd was measured in ITK-SNAP imaging software using a Hounsfield unit threshold of (−190, −30) to define epicardial adipose tissue (EAT). A spherical 3D brush tool was used on multiplanar reformatted images to segment the PCAT. We defined the PCATrd as EAT within an orthogonal distance from the coronary artery (CA) outer wall equal to the diameter of the corresponding CA segment. The segmentation followed the path of the major CAs. Additionally, the PCAT of twenty-five calcified segments was segmented. Reliability of this novel segmentation protocol was assessed using Dice Similarity Coefficients (DSCs) and the intraclass correlation coefficient (ICC).
Results: Segmentation reproducibility for the PCAT was high, with an intraobserver DSC of 0.86±0.04 for the full length of the major CAs and 0.85±0.07 for the calcified segments, and an interobserver DSC of 0.84±0.04 for the full length of the major CAs and 0.83±0.05 for the calcified segments. The reproducibility of the PCATrd value assessed by ICC was also excellent, with intraobserver and interobserver ICCs of 0.99 both for the full length of the major CAs and for the calcified segments.
Conclusions: Our novel 3D PCATrd quantification technique is reliable and reproducible. The availability of the open-source software and a detailed image analysis pipeline will enable reliable replication and broad uptake of our technique.
Key points: We have produced a novel, semi-automated technique to comprehensively quantify pericoronary epicardial adipose tissue radiodensity (PCATrd), a novel imaging biomarker of coronary inflammation. Our method of PCAT segmentation has excellent reproducibility. We use open-source software and provide a detailed image analysis pipeline for quantifying PCATrd, which will allow easy replication and broad uptake of our technique.
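As a concrete illustration of the overlap metric reported above, the Dice Similarity Coefficient between two binary segmentation masks is 2|A ∩ B| / (|A| + |B|). The sketch below, including the Hounsfield-unit windowing helper, is illustrative only; the function names are assumptions, not the authors' actual code.

```python
def eat_mask(hu_values, lo=-190, hi=-30):
    """Label voxels as epicardial adipose tissue by a Hounsfield-unit window."""
    return [1 if lo <= hu <= hi else 0 for hu in hu_values]

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two equal-length binary masks:
    2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# Two observers' masks over six voxels: they agree on two labelled voxels,
# while observer 1 labels one extra voxel, giving DSC = 2*2 / (3+2) = 0.8
obs1 = eat_mask([-100, -50, -20, -150, 40, -200])
obs2 = eat_mask([-100, -50, -25, 0, 40, -200])
```

A DSC near 1 means the two segmentations cover almost the same voxels, which is why values around 0.85 indicate high intra- and interobserver agreement here.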


2007 ◽  
Vol 2007 ◽  
pp. 1-7 ◽  
Author(s):  
B. Jayashree ◽  
Manindra S. Hanspal ◽  
Rajgopal Srinivasan ◽  
R. Vigneshwaran ◽  
Rajeev K. Varshney ◽  
...  

The large amounts of EST sequence data available from a single species of an organism, as well as for several species within a genus, provide an easy source for the identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the data available are numerous, given the degree of redundancy in the deposited EST data. Several bioinformatics tools can be used to mine these data; however, using them requires a certain level of expertise: the tools have to be used sequentially with accompanying format conversion, and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open-source software, extended to run on multiple CPU architectures, that can be used to mine large EST datasets for SNPs and to identify restriction sites for assaying the SNPs, so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the pipeline by mining chickpea ESTs for interspecies SNPs, developing CAPS assays for SNP genotyping, and confirming the restriction digestion pattern at the sequence level.
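The CAPS logic underlying the pipeline, that a SNP is assayable when the two alleles yield different restriction digestion patterns, can be sketched like this. The enzyme recognition sequences are standard, but the function names and the toy flanking sequence are illustrative assumptions, not the pipeline's actual code.

```python
# Recognition motifs for a few common restriction enzymes (standard sequences)
ENZYMES = {"EcoRI": "GAATTC", "HindIII": "AAGCTT", "TaqI": "TCGA"}

def cut_sites(seq, motif):
    """0-based start positions of every occurrence of motif in seq."""
    return [i for i in range(len(seq) - len(motif) + 1)
            if seq[i:i + len(motif)] == motif]

def caps_candidates(seq, pos, ref, alt):
    """Enzymes whose digestion pattern differs between the two SNP alleles,
    i.e. enzymes suitable for a CAPS genotyping assay at this site."""
    allele_ref = seq[:pos] + ref + seq[pos + 1:]
    allele_alt = seq[:pos] + alt + seq[pos + 1:]
    return [name for name, motif in ENZYMES.items()
            if cut_sites(allele_ref, motif) != cut_sites(allele_alt, motif)]

# A C->A SNP at position 5 turns GACTTC into GAATTC, creating an EcoRI site,
# so the two alleles can be distinguished by an EcoRI digest
flanking = "TTTGACTTCAAA"
```

A production pipeline applies this check with a full enzyme table across every SNP detected in the clustered and assembled EST contigs, which is what makes the sequential clustering, assembly, and scanning steps computationally heavy.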


2016 ◽  
Author(s):  
Catherine B. Carbone ◽  
Ronald D. Vale ◽  
Nico Stuurman

We describe open source software and hardware tools for calibrating, acquiring, and analyzing images for scanning angle interference microscopy (SAIM) analysis. These tools make it possible for any user with a TIRF microscope equipped with a motorized illumination unit to generate reconstructed images with nanometer precision in the axial (z) direction and diffraction-limited resolution in the lateral (xy) plane.


Author(s):  
Tania Walisch ◽  
Claude Pepin ◽  
Paul Braun

Over the past 20 years, the Luxembourg National Museum for Natural History (LMNH) has built a bio- and geodiversity information system to collate, manage and publish natural heritage observation and specimen data on a national and international level. To date the system counts over 2 million taxon occurrence records and over 100,000 specimen records. The Museum has chosen, whenever available, public or open source software tools complying with international biodiversity data standards for recording, managing and publishing data, to increase resilience, stay connected with community initiatives and mutualise development costs. A central component of the Museum’s national data hub is Recorder 6, a client-server database software for wildlife recording developed by the National Biodiversity Network in the UK. Today, the Recorder-Lux database contains a large portion of the natural heritage information in Luxembourg and is synchronised daily into a publication database connected via the Integrated Publishing Toolkit (IPT) to the Global Biodiversity Information Facility (GBIF). Moreover, Recorder-Lux data is accessible via the national species mapping portal mdata.mnhn.lu, which has been developed in-house and is aimed at scientists, professionals and decision makers. The Museum has also developed a set of data entry and upload functionalities on its website data.mnhn.lu using the open source software Indicia, a toolkit that provides a ready-made set of services and tools for online wildlife recording. In 2019, we implemented the Atlas of Living Luxembourg (ALL) website all.mnhn.lu, based on the open source Atlas of Living Australia software. ALL is the most comprehensive data portal about natural heritage in Luxembourg, showing specimen data from the museum’s botany, zoology, paleontology, petrology and mineralogy collections as well as fungi, animal and plant observations collected from national and international organisations (via GBIF).
Data providers vary from individual scientific collaborators to professional regional record centers or private consultancies working for public administrations. They use different tools offered by the museum to enter, manage and transfer their data to the system. Thus, several regional record centers chose the client-server Recorder 6 software to manage and exchange their data, whereas individual scientific collaborators of the Museum enter or upload their data via the online data entry forms available on data.mnhn.lu. For large-scale, long-term, professional biodiversity monitoring and inventories at the national level, specific data entry forms and functionalities have been configured on the Indicia website. Finally, citizens can record species observations via the iNaturalist smartphone app. Due to the museum’s long history of conducting field inventories alongside collating and managing natural history collections, the data hub holds observation and collection data in one database. In 2003, the Museum developed the Collection Management and Thesaurus extensions for the Recorder 6 software to catalogue, describe and manage specimens in the Museum collections. These allow handling of field-gathered data alongside specimen-specific data such as storage location, specimen type and conservation status. In recent years this has become an essential tool for the increasing effort directed at the digitisation of the diverse natural history collections at the Museum. Our small database team faces the challenge of integrating an ever-increasing number of records from a variety of datasets, tools and initiatives. To keep the technical and administrative work as simple as possible, we have implemented an open data policy and aim to increase the use of the IPT to connect databases instead of physically importing all data into one central database. To improve data quality, we focus on training experts to work with our Indicia verification tool.


Author(s):  
Passakorn PHANNACHITTA ◽  
Akinori IHARA ◽  
Pijak JIRAPIWONG ◽  
Masao OHIRA ◽  
Ken-ichi MATSUMOTO

Author(s):  
Christina Dunbar-Hester

Hacking, as a mode of technical and cultural production, is commonly celebrated for its extraordinary freedoms of creation and circulation. Yet surprisingly few women participate in it: rates of involvement by technologically skilled women are drastically lower in hacking communities than in industry and academia. This book investigates the activists engaged in free and open-source software to understand why, despite their efforts, they fail to achieve the diversity that their ideals support. The book shows that within this well-meaning volunteer world, beyond the sway of human resource departments and equal opportunity legislation, members of underrepresented groups face unique challenges. The book explores who participates in voluntaristic technology cultures, to what ends, and with what consequences. Digging deep into the fundamental assumptions underpinning STEM-oriented societies, the book demonstrates that while the preferred solutions of tech enthusiasts—their “hacks” of projects and cultures—can ameliorate some of the “bugs” within their own communities, these methods come up short for issues of unequal social and economic power. Distributing “diversity” in technical production is not equal to generating justice. The book reframes questions of diversity advocacy to consider what interventions might appropriately broaden inclusion and participation in the hacking world and beyond.

