An Introduction to Systems Analytics and Integration of Big Omics Data

Genes ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 245 ◽  
Author(s):  
Gary Hardiman

A major technological shift in the research community in the past decade has been the adoption of high throughput (HT) technologies to interrogate the genome, epigenome, transcriptome, and proteome in a massively parallel fashion [...]

2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
Kalpana Raja ◽  
Matthew Patrick ◽  
Yilin Gao ◽  
Desmond Madu ◽  
Yuyang Yang ◽  
...  

In the past decade, the volume of “omics” data generated by different high-throughput technologies has expanded exponentially. Managing, storing, and analyzing these big data has been a great challenge for researchers, especially when moving towards the goal of generating testable data-driven hypotheses, which has been the promise of high-throughput experimental techniques. Different bioinformatics approaches have been developed to streamline the downstream analyses by providing independent information for interpretation and biological inference. Text mining (also known as literature mining) is one of the commonly used approaches for the automated generation of biological knowledge from the huge number of published articles. In this review paper, we discuss recent advances in approaches that integrate results from omics data with information generated by text mining to uncover novel biomedical information.
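As a minimal illustrative sketch of the kind of literature mining the review discusses, the snippet below counts co-occurrences of biomedical terms within abstracts. The abstracts and the term list here are invented for illustration; real pipelines rely on curated lexicons and named-entity recognition rather than exact string matching.

```python
from itertools import combinations
from collections import Counter

# Hypothetical abstracts and a toy term dictionary (illustration only).
abstracts = [
    "TP53 mutations are associated with breast cancer progression.",
    "BRCA1 and TP53 interact in breast cancer pathways.",
]
terms = ["TP53", "BRCA1", "breast cancer"]

# Count how often each pair of terms appears in the same abstract;
# frequent co-occurrence is a crude signal of a biological association.
cooccurrence = Counter()
for text in abstracts:
    present = [t for t in terms if t in text]
    for pair in combinations(sorted(present), 2):
        cooccurrence[pair] += 1
```

Such co-occurrence counts are typically the first step before ranking candidate associations and cross-referencing them against omics results.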


Nanoscale ◽  
2021 ◽  
Vol 13 (15) ◽  
pp. 7294-7307 ◽ 
Author(s):  
Rasoul Khaledialidusti ◽  
Mohammad Khazaei ◽  
Somayeh Khazaei ◽  
Kaoru Ohno

For the past two decades, the rush to synthesize novel two-dimensional (2D) materials has energized the research community studying ternary layered carbide and nitride compounds, known as MAX phases, in the quest to develop new 2D material precursors.


2020 ◽  
Author(s):  
Martín Gutiérrez ◽  
Yerko Ortiz ◽  
Javier Carrión

Abstract
Metaheuristic procedures (MH) have driven Artificial Intelligence (AI) research for the past 50 years. A variety of tools and applications (not only in Computer Science) stem from these techniques. MH also frequently rely on evolution, a hallmark process of cell colony growth. Generally, MH are used to approximate solutions to difficult problems but require a large amount of computational resources. Cell colonies harboring synthetic distributed circuits that use intercell communication offer a direction for tackling this problem, as they process information in a massively parallel fashion. In this work, we propose a framework that maps MH elements to synthetic circuits in growing cell colonies. The framework relies on cell-cell communication mechanisms such as quorum sensing (QS) and bacterial conjugation. As a proof of concept, we also implemented the workflow associated with the framework and tested the execution of two specific MH (Genetic Algorithms and Simulated Annealing) encoded as synthetic circuits on the gro simulator. Furthermore, we show an example of how our framework can be extended by implementing another kind of computational model: the cellular automaton. This work seeks to lay the foundations of mappings for implementing AI algorithms in a general manner using Synthetic Biology constructs in cell colonies.
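To ground the terminology, the sketch below shows one of the two metaheuristics the abstract names, Simulated Annealing, in its conventional sequential form. This is an illustrative toy in Python, not the paper's synthetic-circuit encoding or the gro simulator; the function names, parameters, and toy objective are our own.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, cooling=0.95, steps=500):
    """Minimise `cost` from start point `x0`, accepting worse moves
    with probability exp(-delta / T) as the temperature T cools."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept worse moves stochastically.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling  # geometric cooling schedule
    return best

# Toy usage: minimise (x - 3)^2 starting from 0 with unit random steps.
random.seed(1)
result = simulated_annealing(lambda x: (x - 3) ** 2,
                             lambda x: x + random.uniform(-1, 1),
                             x0=0.0)
```

The early high-temperature phase explores broadly (occasionally accepting worse solutions), while the cooled late phase behaves like greedy hill climbing; the framework in the paper distributes this kind of accept/reject dynamic across communicating cells instead of a single loop.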


2019 ◽  
Vol 26 (13) ◽  
pp. 2330-2355 ◽  
Author(s):  
Anutthaman Parthasarathy ◽  
Sasikala K. Anandamma ◽  
Karunakaran A. Kalesh

Peptide therapeutics has made tremendous progress in the past decade. Many of the inherent weaknesses of peptides that hampered their development as therapeutics are now more or less effectively tackled with recent scientific and technological advancements in integrated drug discovery settings. These include recent developments in synthetic organic chemistry, high-throughput recombinant production strategies, high-resolution analytical methods, high-throughput screening options, ingenious drug delivery strategies and novel formulation preparations. Here, we will briefly describe the key methodologies and strategies used in the therapeutic peptide development process, with selected examples of the most recent developments in the field. The aim of this review is to highlight the viable options a medicinal chemist may consider in order to improve a specific pharmacological property of interest in a peptide lead entity, and thereby to rationally assess the therapeutic potential of this class of molecules, which is traditionally (and incorrectly) considered ‘undruggable’.


Author(s):  
Jeasik Cho

This book provides the qualitative research community with some insight on how to evaluate the quality of qualitative research. This topic has gained little attention during the past few decades. We, qualitative researchers, read journal articles, serve on masters’ and doctoral committees, and also make decisions on whether conference proposals, manuscripts, or large-scale grant proposals should be accepted or rejected. It is assumed that various perspectives or criteria, depending on various paradigms, theories, or fields of discipline, have been used in assessing the quality of qualitative research. Nonetheless, until now, no textbook has been specifically devoted to exploring theories, practices, and reflections associated with the evaluation of qualitative research. This book constructs a typology of evaluating qualitative research, examines actual information from websites and qualitative journal editors, and reflects on some challenges that are currently encountered by the qualitative research community. Many different kinds of journals’ review guidelines and available assessment tools are collected and analyzed. Consequently, core criteria that stand out among these evaluation tools are presented. Readers are invited to join the author to confidently proclaim: “Fortunately, there are commonly agreed, bold standards for evaluating the goodness of qualitative research in the academic research community. These standards are a part of what is generally called ‘scientific research.’ ”


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yiming Chen ◽  
Chi Chen ◽  
Chen Zheng ◽  
Shyam Dwaraknath ◽  
Matthew K. Horton ◽  
...  

Abstract
The L-edge X-ray Absorption Near Edge Structure (XANES) is widely used in the characterization of transition metal compounds. Here, we report the development of a database of computed L-edge XANES using the multiple scattering theory-based FEFF9 code. The initial release of the database contains more than 140,000 L-edge spectra for more than 22,000 structures generated using a high-throughput computational workflow. The data is disseminated through the Materials Project and addresses a critical need for L-edge XANES spectra among the research community.


Author(s):  
Stella C. Yuan ◽  
Eric Malekos ◽  
Melissa T. R. Hawkins

Abstract
The use of museum specimens held in natural history repositories for population and conservation genetic research is increasing in tandem with the use of massively parallel sequencing technologies. Short Tandem Repeats (STRs), or microsatellite loci, are commonly used genetic markers in wildlife and population genetic studies. However, they have traditionally suffered from a host of issues, including length homoplasy, high costs, low throughput, and difficulties in reproducibility across laboratories. Massively parallel sequencing technologies can address these problems, but DNA derived from museum specimens suffers from significant fragmentation and exogenous DNA contamination. Combatting these issues requires extra measures of stringency in the lab and during data analysis, yet no high-throughput sequencing studies have evaluated microsatellite allelic dropout from DNA extracted from museum specimens. In this study, we evaluate genotyping errors derived from mammalian museum skin DNA extracts for previously characterized microsatellites across PCR replicates using high-throughput sequencing. We found it useful to classify samples based on DNA concentration, which determined the rate at which genotypes were accurately recovered. Longer microsatellites performed worse in all museum specimens. Allelic dropout rates across loci were dependent on sample quantity, with high-concentration museum specimens performing as well as, and recovering quality metrics nearly as high as, the frozen tissue sample. Based on our results, we provide a set of best practices for quality assurance and incorporation of reliable genotypes from museum specimens.
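To illustrate the quantity being measured, the sketch below tallies a simple allelic dropout rate across PCR replicates: the fraction of replicate calls that miss an allele present in the consensus genotype. The function, the locus, and the allele sizes are hypothetical examples, not the study's actual pipeline or data.

```python
def dropout_rate(replicate_genotypes, consensus):
    """Fraction of replicate allele calls that miss an allele present in
    the consensus genotype (a simple proxy for allelic dropout)."""
    dropouts = 0
    calls = 0
    for rep in replicate_genotypes:
        for allele in consensus:
            calls += 1
            if allele not in rep:
                dropouts += 1
    return dropouts / calls if calls else 0.0

# Hypothetical STR locus: true heterozygote (180 bp, 184 bp alleles)
# genotyped in four independent PCR replicates; two replicates each
# drop one allele.
reps = [{180, 184}, {180}, {180, 184}, {184}]
rate = dropout_rate(reps, {180, 184})
```

Comparing such per-locus rates across samples binned by DNA concentration is one simple way to reproduce the kind of quality comparison the abstract describes between degraded museum extracts and frozen tissue.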


2020 ◽  
Vol 10 (18) ◽  
pp. 6553 ◽ 
Author(s):  
Sabrina Azzi ◽  
Stéphane Gagnon ◽  
Alex Ramirez ◽  
Gregory Richards

Healthcare has been considered one of the most promising application areas for artificial intelligence and analytics (AIA) ever since these technologies emerged. AI combined with analytics technologies is changing medical practice and healthcare in impressive ways, using efficient algorithms from various branches of information technology (IT). Indeed, numerous works are published every year by universities and innovation centers worldwide, but there are concerns about how effectively this progress translates into practice. There are growing examples of AIA being implemented in healthcare with promising results. This review paper summarizes the past 5 years of healthcare applications of AIA across different techniques and medical specialties and discusses the current issues and challenges related to this revolutionary technology. A total of 24,782 articles were identified. The aim of this paper is to provide the research community with the necessary background to push this field even further and to propose a framework that will help integrate diverse AIA technologies around patient needs in various healthcare contexts, especially for chronic care patients, who present the most complex comorbidities and care needs.


2020 ◽  
Author(s):  
Anna M. Sozanska ◽  
Charles Fletcher ◽  
Dóra Bihary ◽  
Shamith A. Samarajiwa

Abstract
More than three decades ago, the microarray revolution brought high-throughput data generation capability to biology and medicine. Subsequently, the emergence of massively parallel sequencing technologies led to many big-data initiatives, such as the Human Genome Project and the Encyclopedia of DNA Elements (ENCODE) project. These, in combination with cheaper, faster massively parallel DNA sequencing capabilities, have democratised multi-omic (genomic, transcriptomic, translatomic and epigenomic) data generation, leading to a data deluge in biomedicine. While some of these data-sets are trapped in inaccessible silos, the vast majority are stored in public data resources and controlled-access data repositories, enabling their wider use (or misuse). Currently, most peer-reviewed publications require the deposition of the data-set associated with a study in one of these public data repositories. However, clunky, difficult-to-use interfaces and subpar or incomplete annotation prevent the discovery, searching and filtering of these multi-omic data and hinder their re-purposing in other use cases. In addition, the proliferation of a multitude of different data repositories, with partially redundant storage of similar data, is yet another obstacle to their continued usefulness. Similarly, interfaces where annotation is spread across multiple web pages, accession identifiers with ambiguous and multiple interpretations, and a lack of good curation make these data-sets difficult to use. We have produced SpiderSeqR, an R package whose main features include integration between the NCBI GEO and SRA databases, enabling a unified search of SRA and GEO data-sets and associated annotations, conversion between database accessions, convenient filtering of results, and saving of past queries for future use.
All of the above features aim to promote data reuse, to facilitate making new discoveries and to maximise the potential of existing data-sets.
Availability: https://github.com/ss-lab-cancerunit/SpiderSeqR

