Intuitive Web-Based Experimental Design for High-Throughput Biomedical Data

2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Andreas Friedrich ◽  
Erhan Kenar ◽  
Oliver Kohlbacher ◽  
Sven Nahnsen

Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data are accompanied by accurate metadata annotation. Particularly in high-throughput experiments, intelligent approaches are needed to keep track of the experimental design, including the conditions under study as well as information that might be relevant for failure analysis or follow-up experiments. In addition to managing this information, researchers urgently need means for integrated design and interfaces for structured data annotation. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information, we provide a spreadsheet-based, human-readable format. Subsequently, sample sheets with identifiers and meta-information for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model.
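A factor-based design of this kind amounts to taking the cross product of factor levels and writing the combinations out as a sample sheet. The sketch below illustrates the idea in Python; the factor names, levels, and identifier scheme are invented for the example, not the paper's actual schema:

```python
# Sketch: generating a factor-based experimental design as a sample sheet.
# Factors, levels, and the "QS" identifier prefix are illustrative assumptions.
from itertools import product
import csv
import io

factors = {
    "genotype": ["wild-type", "knockout"],
    "treatment": ["control", "drug"],
    "timepoint_h": [0, 24],
}

names = list(factors)
rows = []
for i, combo in enumerate(product(*factors.values()), start=1):
    row = {"sample_id": f"QS{i:03d}"}  # identifier for the data generation facility
    row.update(dict(zip(names, combo)))
    rows.append(row)

# Write a spreadsheet-like, human-readable sample sheet (TSV here).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["sample_id"] + names, delimiter="\t")
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

With two levels per factor this yields 2 × 2 × 2 = 8 samples; each uploaded data file can later be joined back to its row via the sample identifier.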

Author(s):  
Manisha Sritharan ◽  
Farhat A. Avin

Biological big data represents a vast amount of data in bioinformatics, and it is transforming research into a large-scale endeavor. In medical research, large volumes of data are generated by tools such as genomic sequencers. The availability of advanced tools and modern technology is the main driver of this huge expansion of biological data. Such immense data should be utilized efficiently so that the valuable information it contains can be distributed. At the same time, storing and handling such big data has become a great challenge, as data generation is increasing tremendously year over year. Likewise, the explosion of data in healthcare systems and biomedical research calls for an immediate solution, since health care requires tight integration of biomedical data. Researchers should therefore analyze the big data that is already available, rather than keep creating new data, as current advanced bioinformatics tools can extract meaningful information from it.


Web Services ◽  
2019 ◽  
pp. 953-978
Author(s):  
Krishnan Umachandran ◽  
Debra Sharon Ferdinand-James

Continued technological advancements of the 21st century afford massive data generation in sectors of our economy, including the domains of agriculture, manufacturing, and education. However, harnessing such large-scale data with modern technologies for effective decision-making is an evolving science that requires knowledge of big data management and analytics. Big data in agriculture, manufacturing, and education is varied, including voluminous text, images, and graphs. Applying big data science techniques (e.g., functional algorithms) for extracting intelligence affords decision-makers quick responses to productivity, market-resilience, and student-enrollment challenges in today's unpredictable markets. This chapter employs data science to propose potential solutions to big data applications in agriculture and manufacturing and, to a lesser extent, education, using modern technological tools such as Hadoop, Hive, Sqoop, and MongoDB.
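The chapter names Hadoop among its tools; the core map/shuffle/reduce pattern that Hadoop distributes across a cluster can be illustrated on a single machine in a few lines (the records here are invented):

```python
# Minimal single-machine illustration of the map/shuffle/reduce pattern
# behind tools like Hadoop. Real deployments run these phases distributed.
from collections import defaultdict

records = ["wheat maize wheat", "maize rice", "wheat"]

# Map: emit (key, 1) pairs, one per word.
mapped = [(word, 1) for line in records for word in line.split()]

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each group.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # wheat: 3, maize: 2, rice: 1
```

The same three phases apply whether the keys are words, crop yields, or student-enrollment codes; only the map and reduce functions change.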


2011 ◽  
Vol 7 (S285) ◽  
pp. 340-341
Author(s):  
Dayton L. Jones ◽  
Kiri Wagstaff ◽  
David Thompson ◽  
Larry D'Addario ◽  
Robert Navarro ◽  
...  

The detection of fast (< 1 second) transient signals requires a challenging balance between the need to examine vast quantities of high time-resolution data and the impracticality of storing all the data for later analysis. This is the epitome of a "big data" issue: far more data will be produced by next-generation astronomy facilities than can be analyzed, distributed, or archived using traditional methods. JPL is developing technologies to deal with "big data" problems from initial data generation, through real-time data triage algorithms, to large-scale data archiving and mining. Although most current work is focused on the needs of large radio arrays, the technologies involved are widely applicable in other areas.
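A real-time triage step of the kind described can be sketched as simple outlier thresholding on a sample stream, keeping only windows worth storing; the threshold and data below are illustrative, not JPL's actual algorithms:

```python
# Sketch of a triage step: flag candidate fast transients whose deviation
# from the stream mean exceeds a sigma threshold, so only those windows
# are kept for storage. Threshold and data are illustrative assumptions.
import statistics

def triage(samples, threshold_sigma=2.5):
    mean = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    # Return indices of samples exceeding the threshold.
    return [i for i, x in enumerate(samples)
            if sigma > 0 and (x - mean) / sigma > threshold_sigma]

stream = [1.0, 1.2, 0.9, 1.1, 50.0, 1.0, 0.8, 1.1, 1.05, 0.95]
print(triage(stream))
```

Real pipelines first dedisperse the time series and estimate noise over a sliding window, but the keep-or-discard decision reduces to a comparison like this one.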


2009 ◽  
Vol 10 (S9) ◽  
Author(s):  
Ronilda Lacson ◽  
Erik Pitzer ◽  
Christian Hinske ◽  
Pedro Galante ◽  
Lucila Ohno-Machado

2018 ◽  
Author(s):  
Harold Fellermann ◽  
Ben Shirt-Ediss ◽  
Jerzy W. Koryza ◽  
Matthew Linsley ◽  
Dennis W. Lendrem ◽  
...  

Our PCR Simulator is a web-based application designed to introduce concepts of multi-factorial experimental design and support teaching of the polymerase chain reaction. Learners select experimental settings and receive results of their simulated reactions quickly, allowing rapid iteration between data generation and analysis. This enables the student to perform complex iterative experimental design strategies within a short teaching session. Here we provide a short overview of the user interface and underpinning model, and describe our experience using this tool in a teaching environment.
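The design-simulate-analyze loop such a tool supports can be sketched with a textbook exponential amplification model; the factors, levels, and efficiency values below are assumptions for illustration, not the simulator's actual model:

```python
# Sketch of a multi-factorial PCR "experiment": iterate over factor settings
# and simulate amplification with a textbook exponential model.
# Factor levels and the efficiency model are illustrative assumptions.
from itertools import product

def simulate_pcr(template_copies, cycles, efficiency):
    # Each cycle multiplies copy number by (1 + efficiency), efficiency in [0, 1].
    return template_copies * (1 + efficiency) ** cycles

# Two-factor design: cycle count x (annealing-dependent) efficiency.
for cycles, eff in product([20, 30], [0.7, 0.95]):
    yield_copies = simulate_pcr(100, cycles, eff)
    print(f"cycles={cycles} efficiency={eff}: {yield_copies:.3g} copies")
```

Because simulated results return instantly, a student can compare these four runs, refine the factor levels, and rerun within a single teaching session, which is exactly the iteration the tool is built for.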


2020 ◽  
Author(s):  
Hualin Liu ◽  
Jinshui Zheng ◽  
Dexin Bo ◽  
Yun Yu ◽  
Weixing Ye ◽  
...  

Summary: Bacillus thuringiensis (Bt), a spore-forming gram-positive bacterium, has been used as the most successful microbial pesticide for decades. Its toxin genes (cry) have been used successfully in the development of GM crops resistant to pests. We previously developed a web-based insecticidal gene mining tool, BtToxin_scanner, which has proven to be the most important method for mining cry genes from Bt genome sequences. To efficiently mine major toxin genes and novel virulence factors from large-scale Bt genomic data, we redesigned this tool with a new workflow. Here we present BtToxin_Digger, a comprehensive, high-throughput, and easy-to-use Bt toxin mining tool. It runs fast and produces rich, accurate, and useful results for downstream analyses and experimental designs. Moreover, it can also be used to mine other target genes from large-scale genome and metagenome data by supplying other query sequences.
Availability and Implementation: The BtToxin_Digger code and instructions are freely available at https://github.com/BMBGenomics/BtToxin_Digger. A web server for BtToxin_Digger can be found at http://bcam.hzau.edu.cn/
Contact: [email protected]; [email protected]
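As an illustration of the query-driven mining idea (not BtToxin_Digger's actual pipeline, which combines gene prediction with full sequence-similarity searches), a crude k-mer pre-filter for flagging candidate records can be sketched; the query fragment, record names, and thresholds are invented for the example:

```python
# Crude sketch of a pre-filter for sequence-based gene mining: flag records
# sharing enough short exact matches (k-mers) with a query sequence.
# This is NOT BtToxin_Digger's method; all names and thresholds are invented.
def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def candidate_hits(query, genome_records, k=8, min_shared=3):
    query_kmers = kmers(query, k)
    hits = []
    for name, seq in genome_records.items():
        shared = len(query_kmers & kmers(seq, k))
        if shared >= min_shared:
            hits.append((name, shared))
    # Strongest candidates first.
    return sorted(hits, key=lambda h: -h[1])

query = "MNPNNRSEHDTIKTTENNEVPTNHVQYP"  # illustrative fragment only
records = {
    "contig_1": "XXXMNPNNRSEHDTIKTTENNEVPTNHVQYPZZZ",
    "contig_2": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
}
print(candidate_hits(query, records))
```

Records passing such a cheap filter would then go to the expensive alignment and classification stages; swapping in a different query sequence retargets the search, which mirrors the abstract's point about mining other genes.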


Author(s):  
Vijander Singh ◽  
Amit Kumar Bairwa ◽  
Deepak Sinwar

In today's connected world, data is generated every second in many areas such as astronomy, social media, medicine, transportation, e-commerce, scientific research, agriculture, and video and audio downloads. According to one survey, every 60 seconds YouTube gains more than 600 new users and 7 billion queries are executed on Google. We can therefore say that an immense amount of structured, unstructured, and semi-structured data is produced every second across the cyber world, and it must be managed efficiently. Big data exhibits properties such as unpredictability, the 'V' factors, and multivariable information, and it must be stored, retrieved, and distributed. Logically organized data can serve as information in the digital world. In the past century, data sources were very limited in size and could be managed with pen and paper. Later generations of data-handling tools include Microsoft Excel and Access, and database tools such as SQL, MySQL, and DB2.


2021 ◽  
Vol 12 (15) ◽  
pp. 5566-5573
Author(s):  
Salini Senthil ◽  
Sabyasachi Chakraborty ◽  
Raghunathan Ramakrishnan

A high-throughput workflow for connectivity preserving geometry optimization minimizes unintended structural rearrangements during quantum chemistry big data generation.
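One way to operationalize "connectivity preserving" is to compare covalent adjacency before and after optimization: if the bond graph changes, a rearrangement occurred. The sketch below uses a common covalent-radius heuristic with an assumed 1.3 tolerance factor, not the paper's actual criterion:

```python
# Sketch of a connectivity-preservation check: derive covalent adjacency from
# interatomic distances vs. covalent-radius sums, before and after optimization.
# Radii and the 1.3 tolerance factor are common heuristics, assumed here.
import math

COVALENT_RADII = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66}  # Angstrom

def adjacency(symbols, coords, tol=1.3):
    bonds = set()
    for i in range(len(symbols)):
        for j in range(i + 1, len(symbols)):
            d = math.dist(coords[i], coords[j])
            cutoff = tol * (COVALENT_RADII[symbols[i]] + COVALENT_RADII[symbols[j]])
            if d < cutoff:
                bonds.add((i, j))
    return bonds

# Water, before and after a hypothetical optimization step.
symbols = ["O", "H", "H"]
before = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]
after = [(0.0, 0.0, 0.0), (0.97, 0.0, 0.0), (-0.25, 0.94, 0.0)]
print(adjacency(symbols, before) == adjacency(symbols, after))  # connectivity preserved?
```

In a high-throughput setting, structures failing this check would be rejected or re-optimized under constraints rather than silently entering the dataset.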


2020 ◽  
Vol 39 (5) ◽  
pp. 397-421
Author(s):  
Charlene Andraos ◽  
Il Je Yu ◽  
Mary Gulumian

Despite several studies addressing nanoparticle (NP) interference with conventional toxicity assay systems, researchers still rely heavily on these assays, particularly for high-throughput screening (HTS) applications, in order to generate "big" data for predictive toxicity approaches. Moreover, researchers often overlook the different types of interference mechanisms, even though the type is evidently dependent on the assay system implemented. The approaches in the literature appear inadequate, as they often address only one type of interference mechanism to the exclusion of others. For example, interference of NPs that have entered cells would require intracellular assessment of their interference with fluorescent dyes, which has so far been neglected. The present study investigated the mechanisms of interference of gold NPs and silver NPs in assay systems implemented in HTS, including optical interference as well as adsorption or catalysis. The conventional assays selected cover all optical read-out systems, that is, absorbance (XTT toxicity assay), fluorescence (CytoTox-ONE homogeneous membrane integrity assay), and luminescence (CellTiter-Glo luminescent assay). Furthermore, this study demonstrated NP quenching of fluorescent dyes also used in HTS (2′,7′-dichlorofluorescein, propidium iodide, and 5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethyl-benzamidazolocarbocyanin iodide). To conclude, NP interference is not, as such, a novel concept; however, ignoring this aspect in HTS may jeopardize attempts at predictive toxicology. It should be mandatory to report the assessment of all mechanisms of interference within HTS, and to confirm results with label-free methodologies, to ensure reliable big data generation for predictive toxicology.
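The blank-subtraction logic behind such interference controls reduces to simple arithmetic: subtract a particle-only blank from the co-incubated reading, then express the remaining loss relative to the dye-only reading. The function and RFU values below are illustrative, not the study's measurements:

```python
# Sketch of an optical-interference control calculation for a fluorescence assay.
# All signal values are illustrative, not measurements from the study.
def percent_quenching(dye_signal, dye_plus_np_signal, np_only_signal):
    # Correct the co-incubated reading for the particles' own optical
    # contribution, then express the loss relative to the dye-only reading.
    corrected = dye_plus_np_signal - np_only_signal
    return 100.0 * (dye_signal - corrected) / dye_signal

# Dye alone reads 1000 RFU; with NPs 650 RFU, of which 50 RFU is NP scatter.
print(percent_quenching(1000.0, 650.0, 50.0))  # 40.0
```

A control of this kind only captures optical interference; adsorption or catalytic mechanisms need separate controls, which is the abstract's point about assessing all interference mechanisms.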

