Investigating Global Lipidome Alterations with the Lipid Network Explorer

Metabolites ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 488
Author(s):  
Nikolai Köhler ◽  
Tim Daniel Rose ◽  
Lisa Falk ◽  
Josch Konstantin Pauling

Lipids play an important role in biological systems and have the potential to serve as biomarkers in medical applications. Advances in lipidomics allow the identification of hundreds of lipid species from biological samples. However, a systems-biological analysis of the lipidome that incorporates pathway information remains challenging, leaving lipidomics behind other omics disciplines. An especially uncharted territory is the integration of statistical and network-based approaches for studying global lipidome changes. Here we developed the Lipid Network Explorer (LINEX), a web tool that addresses this gap by providing a way to visualize and analyze functional lipid metabolic networks. It utilizes metabolic rules to match biochemically connected lipids on the species level and combines this with statistical correlation and testing analyses. Researchers can customize the biochemical rules for tissue- or organism-specific analyses and easily share them. We demonstrate the benefits of combining network-based analyses with statistics using publicly available lipidomics data sets. LINEX facilitates a biochemical knowledge-based data analysis for lipidomics. It is available as a web application and as a publicly available Docker container.
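The rule-based matching described above can be pictured with a small sketch. The rules and lipid species here are invented for illustration; LINEX's actual, user-configurable rule set is far richer:

```python
from itertools import combinations

# Hypothetical, simplified biochemical rules: lipids are modeled as
# (class, carbons, double_bonds); two species are linked if a single
# reaction-like step converts one into the other.
RULES = [
    lambda a, b: a[0] == b[0] and a[1] == b[1] and abs(a[2] - b[2]) == 1,  # desaturation
    lambda a, b: a[0] == b[0] and abs(a[1] - b[1]) == 2 and a[2] == b[2],  # elongation
    lambda a, b: {a[0], b[0]} == {"PC", "PE"} and a[1:] == b[1:],          # head-group exchange
]

def lipid_network(lipids):
    """Return the undirected edge set implied by the rules."""
    return {frozenset((a, b))
            for a, b in combinations(lipids, 2)
            if any(rule(a, b) for rule in RULES)}

species = [("PC", 34, 1), ("PC", 34, 2), ("PC", 36, 1), ("PE", 34, 1)]
edges = lipid_network(species)  # 3 edges, all anchored at PC 34:1
```

On such a network, per-edge correlation statistics between the connected species' abundances could then be overlaid, as the abstract describes.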

2019 ◽  
Author(s):  
Bruno Savelli ◽  
Sylvain Picard ◽  
Christophe Roux ◽  
Christophe Dunand

ABSTRACT The recent explosion of transcriptomic and proteomic data has resulted in vast numbers of data sets that are unconnected and sometimes too large to be analysed easily. Integration between data sets and analysis of extracted data sets are limiting factors that need to be solved in order to make full use of the data and to connect it. ExpressWeb is an online web tool that combines Taylor clustering of expression data sets, to extract gene networks, with gene annotations, to visualise the co-expression network. Data sets can come from personal or publicly available experiments. ExpressWeb makes it easy to compute clustering on expression data and provides friendly, useful visualisation tools such as heatmaps, graphs and networks, generating output images that can be used in scientific publications.
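The co-expression idea underlying such tools can be sketched minimally as correlation thresholding. Gene names and expression values below are invented, and ExpressWeb's actual clustering is more sophisticated:

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def coexpression_edges(expr, threshold=0.9):
    """Link gene pairs whose expression profiles correlate strongly."""
    genes = sorted(expr)
    return [(g1, g2)
            for i, g1 in enumerate(genes)
            for g2 in genes[i + 1:]
            if abs(pearson(expr[g1], expr[g2])) >= threshold]

expr = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.1, 4.0, 6.2, 8.1],   # tracks geneA across conditions
    "geneC": [5.0, 1.0, 4.0, 2.0],   # unrelated profile
}
edges = coexpression_edges(expr, threshold=0.9)
```

The resulting edge list is what a tool like this would hand to its heatmap and network visualisations.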


2021 ◽  
pp. 016555152199863
Author(s):  
Ismael Vázquez ◽  
María Novo-Lourés ◽  
Reyes Pavón ◽  
Rosalía Laza ◽  
José Ramón Méndez ◽  
...  

Current research has evolved in such a way that scientists must not only adequately describe the algorithms they introduce and the results of their application, but also ensure that the results can be reproduced and compared with those obtained through other approaches. In this context, public data sets (sometimes shared through repositories) are one of the most important elements for the development of experimental protocols and test benches. This study analysed a significant number of CS/ML (Computer Science/Machine Learning) research data repositories and data sets and detected some limitations that hamper their utility. In particular, we identify and discuss the following in-demand functionalities for repositories: (1) building customised data sets for specific research tasks, (2) facilitating the comparison of different techniques using dissimilar pre-processing methods, (3) ensuring the availability of software applications to reproduce the pre-processing steps without using the repository functionalities and (4) providing protection mechanisms for licencing issues and user rights. To demonstrate the introduced functionality, we created the STRep (Spam Text Repository) web application, which implements our recommendations adapted to the field of spam text repositories. In addition, we launched an instance of STRep at the URL https://rdata.4spam.group to facilitate understanding of this study.
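Functionality (3), reproducing pre-processing outside the repository, amounts to shipping a replayable recipe alongside the data. A toy sketch (step names and the recipe format are invented, not STRep's actual scheme):

```python
import json

# Registry of named, deterministic pre-processing steps.
STEPS = {
    "lowercase": str.lower,
    "strip_urls": lambda t: " ".join(w for w in t.split()
                                     if not w.startswith("http")),
}

def apply_pipeline(text, pipeline):
    """Replay a recorded sequence of pre-processing steps."""
    for step in pipeline:
        text = STEPS[step](text)
    return text

pipeline = ["lowercase", "strip_urls"]   # the shareable recipe
recipe = json.dumps(pipeline)            # ships alongside the data set
restored = json.loads(recipe)            # replayed outside the repository
clean = apply_pipeline("WIN a PRIZE http://spam.example now", restored)
```

Because the recipe is plain data, any external script can reproduce exactly the corpus a published experiment used.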


GigaScience ◽  
2020 ◽  
Vol 9 (1) ◽  
Author(s):  
T Cameron Waller ◽  
Jordan A Berg ◽  
Alexander Lex ◽  
Brian E Chapman ◽  
Jared Rutter

Abstract Background Metabolic networks represent all chemical reactions that occur between molecular metabolites in an organism’s cells. They offer biological context in which to integrate, analyze, and interpret omic measurements, but their large scale and extensive connectivity present unique challenges. While it is practical to simplify these networks by placing constraints on compartments and hubs, it is unclear how these simplifications alter the structure of metabolic networks and the interpretation of metabolomic experiments. Results We curated and adapted the latest systemic model of human metabolism and developed customizable tools to define metabolic networks with and without compartmentalization in subcellular organelles and with or without inclusion of prolific metabolite hubs. Compartmentalization made networks larger, less dense, and more modular, whereas hubs made networks larger, more dense, and less modular. When present, these hubs also dominated shortest paths in the network, yet their exclusion exposed the subtler prominence of other metabolites that are typically more relevant to metabolomic experiments. We applied the non-compartmental network without metabolite hubs in a retrospective, exploratory analysis of metabolomic measurements from 5 studies on human tissues. Network clusters identified individual reactions that might experience differential regulation between experimental conditions, several of which were not apparent in the original publications. Conclusions Exclusion of specific metabolite hubs exposes modularity in both compartmental and non-compartmental metabolic networks, improving detection of relevant clusters in omic measurements. Better computational detection of metabolic network clusters in large data sets has potential to identify differential regulation of individual genes, transcripts, and proteins.
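The effect of excluding a prolific hub can be seen on a toy network. Metabolite names are illustrative only, and the curated human model is vastly larger:

```python
from collections import deque

def shortest_path_len(graph, src, dst):
    """Breadth-first search; returns hop count or None if unreachable."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

def without(graph, hub):
    """Copy of the graph with one hub metabolite removed."""
    return {n: {m for m in nbrs if m != hub}
            for n, nbrs in graph.items() if n != hub}

# Toy network in which ATP is a prolific hub touching everything.
g = {
    "glucose":  {"g6p", "ATP"},
    "g6p":      {"glucose", "f6p", "ATP"},
    "f6p":      {"g6p", "ATP"},
    "pyruvate": {"ATP"},
    "ATP":      {"glucose", "g6p", "f6p", "pyruvate"},
}

with_hub = shortest_path_len(g, "glucose", "pyruvate")                  # 2, via ATP
no_hub = shortest_path_len(without(g, "ATP"), "glucose", "pyruvate")    # unreachable
```

With the hub present, every metabolite is two hops from every other, which is why hubs dominate shortest paths; removing it exposes the subtler glycolytic chain as a module of its own.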


2014 ◽  
Vol 102 (1) ◽  
pp. 69-80 ◽  
Author(s):  
Torregrosa Daniel ◽  
Forcada Mikel L. ◽  
Pérez-Ortiz Juan Antonio

Abstract We present a web-based open-source tool for interactive translation prediction (ITP) and describe its underlying architecture. ITP systems assist human translators by making context-based computer-generated suggestions as they type. Most of the ITP systems in the literature are strongly coupled with a statistical machine translation system that is conveniently adapted to provide the suggestions. Our system, however, follows a resource-agnostic approach: suggestions are obtained from any unmodified black-box bilingual resource. This paper reviews our ITP method and describes the architecture of Forecat, a web tool, partly based on the recent technology of web components, that eases the use of our ITP approach in any web application requiring this kind of translation assistance. We also evaluate the performance of our method when using an unmodified Moses-based statistical machine translation system as the bilingual resource.
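The black-box flavour of ITP can be illustrated in a few lines: the bilingual resource is queried as-is, and a suggestion is simply the part of its output that extends what the translator has typed. This is a deliberately naive sketch, not Forecat's actual matching logic:

```python
def suggest(black_box_translation, typed_prefix):
    """Offer the remainder of an unmodified machine translation that
    extends the translator's typed prefix (empty string if no match)."""
    if black_box_translation.startswith(typed_prefix):
        return black_box_translation[len(typed_prefix):]
    return ""

# The "resource" here is just a fixed string standing in for any
# black-box bilingual engine's output for the source sentence.
mt_output = "the cat sat on the mat"
completion = suggest(mt_output, "the cat s")
```

Because the resource is never modified, any engine, memory, or dictionary that returns target-language text can be plugged in.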


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. V141-V150 ◽  
Author(s):  
Emanuele Forte ◽  
Matteo Dossi ◽  
Michele Pipan ◽  
Anna Del Ben

We have applied an attribute-based autopicking algorithm to reflection seismics with the aim of reducing the influence of the user’s subjectivity on the picking results and making the interpretation faster with respect to manual and semiautomated techniques. Our picking procedure uses the cosine of the instantaneous phase to automatically detect and mark as a horizon any recorded event characterized by lateral phase continuity. A patching procedure, which exploits horizon parallelism, can be used to connect consecutive horizons marking the same event but separated by noise-related gaps. The picking process marks all coherent events regardless of their reflection strength; therefore, a large number of independent horizons can be constructed. To facilitate interpretation, horizons marking different phases of the same reflection can be automatically grouped together and specific horizons from each reflection can be selected using different possible methods. In the phase method, the algorithm reconstructs the reflected wavelets by averaging the cosine of the instantaneous phase along each horizon. The resulting wavelets are then locally analyzed and compared through crosscorrelation, allowing the recognition and selection of specific reflection phases. When the reflected wavelets cannot be recovered due to shape-altering processing or a low signal-to-noise ratio, the energy method uses the reflection strength to group together subparallel horizons within the same energy package and to select those satisfying either energy or arrival-time criteria. These methods can be applied automatically to all the picked horizons or to horizons individually selected by the interpreter for specific analysis. We show examples of application to 2D reflection seismic data sets in complex geologic and stratigraphic conditions, critically reviewing the performance of the whole process.
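The core picking step, marking laterally continuous phase events, can be sketched as follows. The input is assumed to be a precomputed trace-by-sample grid of the cosine of the instantaneous phase (normally obtained via a Hilbert transform, omitted here), and the threshold and linking tolerance are invented parameters:

```python
def pick_horizons(cos_phase, tol=1):
    """Link picks (cosine of instantaneous phase near +1) across
    adjacent traces into horizons, allowing `tol` samples of shift."""
    # Per-trace pick positions: sample indices near a phase maximum.
    picks = [[i for i, v in enumerate(trace) if v > 0.95]
             for trace in cos_phase]
    horizons = []
    for trace_idx, trace_picks in enumerate(picks):
        for s in trace_picks:
            for h in horizons:
                last_trace, last_sample = h[-1]
                # Extend a horizon that ended on the previous trace
                # within the vertical tolerance (lateral continuity).
                if last_trace == trace_idx - 1 and abs(last_sample - s) <= tol:
                    h.append((trace_idx, s))
                    break
            else:
                horizons.append([(trace_idx, s)])   # start a new horizon
    return horizons

# Three traces with one gently dipping event (samples 2 -> 3 -> 4).
cos_phase = [
    [0.1, 0.2, 1.00, 0.3, 0.1, 0.0],
    [0.0, 0.1, 0.40, 0.99, 0.2, 0.1],
    [0.0, 0.0, 0.20, 0.5, 0.97, 0.1],
]
horizons = pick_horizons(cos_phase)
```

A patching pass, as described in the abstract, would then merge horizons that mark the same event across noise-related gaps wider than `tol`.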


2016 ◽  
Author(s):  
Stephen G. Gaffney ◽  
Jeffrey P. Townsend

ABSTRACT Summary: PathScore quantifies the level of enrichment of somatic mutations within curated pathways, applying a novel approach that identifies pathways enriched across patients. The application provides several user-friendly, interactive graphic interfaces for data exploration, including tools for comparing pathway effect sizes, significance, gene-set overlap and enrichment differences between projects. Availability and Implementation: Web application available at pathscore.publichealth.yale.edu. Site implemented in Python and MySQL, with all major browsers supported. Source code available at github.com/sggaffney/pathscore under a GPLv3 license. Contact: [email protected]. Supplementary Information: Additional documentation can be found at http://pathscore.publichealth.yale.edu/faq.
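Pathway enrichment of mutated genes is classically scored with a one-sided hypergeometric test; the sketch below shows that baseline, not PathScore's own patient-level statistic, and the gene counts are invented:

```python
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k): probability that at least k of n mutated genes fall in
    a K-gene pathway when drawing from N total genes (enrichment test)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Example: 3 of 5 mutated genes land in a 10-gene pathway
# within a hypothetical 100-gene background.
p = hypergeom_pvalue(N=100, K=10, n=5, k=3)   # small p suggests enrichment
```

PathScore's contribution, per the abstract, is aggregating such per-patient evidence across a cohort rather than testing one mutation list in isolation.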


2018 ◽  
Vol 7 (3.33) ◽  
pp. 168
Author(s):  
Yonglak SHON ◽  
Jaeyoung PARK ◽  
Jangmook KANG ◽  
Sangwon LEE

LOD data sets consist of RDF triples based on an ontology, a specification of existing facts, linked to previously disclosed knowledge according to linked-data principles. These structured LOD clouds form a large global data network, which gives users a more accurate foundation for delivering the desired information. However, when the presence of the same object is identified differently across several LOD data sets, it is difficult to establish that the occurrences are inherently identical. This is because objects with different URIs in the LOD data sets are presumed to be different and must be closely examined for similarities in order to judge them identical. In this study, the proposed model, RILE, evaluates similarity by comparing the object values of specified predicates. After performing experiments with our model, we verified an improvement in the confidence level of the connections by extracting link values.
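Comparing object values of designated predicates across two resources can be reduced to an agreement score. This is a crude, hypothetical stand-in for RILE's actual evaluation, with invented resources and predicates:

```python
def predicate_similarity(entity_a, entity_b, predicates):
    """Fraction of designated predicates whose object values agree
    between two LOD resources with different URIs."""
    matches = sum(1 for p in predicates
                  if p in entity_a and p in entity_b
                  and entity_a[p] == entity_b[p])
    return matches / len(predicates)

# Two descriptions of (possibly) the same person from different clouds.
resource_a = {"name": "Ada Lovelace", "birthYear": "1815", "field": "mathematics"}
resource_b = {"name": "Ada Lovelace", "birthYear": "1815", "field": "computing"}
score = predicate_similarity(resource_a, resource_b,
                             ["name", "birthYear", "field"])  # 2 of 3 agree
```

A threshold on such a score would then decide whether an identity link (e.g. `owl:sameAs`) between the two URIs is trustworthy.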


2020 ◽  
Author(s):  
Annika Tjuka ◽  
Robert Forkel ◽  
Johann-Mattis List

Psychologists and linguists have collected a great diversity of data on word and concept properties. In psychology, many studies accumulate norms and ratings, such as word frequency or age of acquisition, often for large numbers of words. Linguistics, on the other hand, provides valuable insights into the relations between word meanings. We present a collection of such data sets for norms, ratings, and relations covering different languages: ‘NoRaRe.’ To enable a comparison between the diverse data types, we established workflows that facilitate the expansion of the database. A web application allows convenient access to the data (https://digling.org/norare/). Furthermore, a software API ensures consistent data curation by providing tests to validate the data sets. The NoRaRe collection is linked to the database curated by the Concepticon project (https://concepticon.clld.org), which offers a reference catalog of unified concept sets. The link between words in the data sets and the Concepticon concept sets makes cross-linguistic comparison possible. In three case studies, we test the validity of our approach, the accuracy of our workflow, and the applicability of our database. The results indicate that the NoRaRe database can be applied to the study of word properties across multiple languages. The data can be used by psychologists and linguists to benefit from the knowledge rooted in both research disciplines.
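The cross-linguistic link works like a join on concept identifiers. The sketch below uses an invented concept ID and made-up property tables, not actual NoRaRe or Concepticon data:

```python
def link_by_concept(dataset_a, dataset_b):
    """Join two word-property tables on their shared concept IDs,
    enabling comparison of properties across languages."""
    b_index = {row["concepticon_id"]: row for row in dataset_b}
    return [
        {**a_row, **b_index[a_row["concepticon_id"]]}
        for a_row in dataset_a
        if a_row["concepticon_id"] in b_index
    ]

# Hypothetical rows: an English frequency norm and a German
# age-of-acquisition rating mapped to the same concept set.
english_freq = [{"concepticon_id": 906, "word": "dog", "frequency": 5.2}]
german_aoa = [{"concepticon_id": 906, "word_de": "Hund", "aoa": 2.9}]
linked = link_by_concept(english_freq, german_aoa)
```

Because both tables reference the same concept catalog, properties collected for different languages and by different disciplines end up comparable row by row.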


Author(s):  
Amey Thakur

The project's main goal is to build an online book store where users can search for and buy books by title, author, and subject. The selected books are shown in tabular form, and the customer may buy them online using a credit card. Using this website, the user can buy a book online rather than visiting a bookshop and spending time there. Many online bookstores, such as Powell's and Amazon, were created using HTML; we propose building a comparable website with .NET and SQL Server. An online book store is a web application that allows customers to purchase e-books. Through a web browser, customers can search for a book by its title or author, add it to the shopping cart, and finally purchase it using a credit card transaction. A client may sign in using his login credentials, or new clients can simply open an account. Customers must submit their full name, contact details, and shipping address. The user may also review a book by rating it on a scale of one to five. The books are classified into different categories depending on their subject matter, such as software, databases, English, and architecture. Customers can shop at the Online Book Store website using a web browser: a client may create an account, sign in, add items to his shopping basket, and buy them using his credit card details. The administrator has more privileges than a regular user: he can add, delete, and edit book details, book categories, and member information, as well as confirm placed orders. This application was created with PHP and other web programming languages. The Online Book Store is built using the master page, data sets, data grids, and user controls.
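The shopping-cart flow described above reduces to a small data structure. This is an illustrative model only, with invented titles and prices, not the project's actual implementation:

```python
class ShoppingCart:
    """Minimal model of the cart: add titles, accumulate quantities,
    and total the order before the credit-card step."""

    def __init__(self):
        self.items = {}  # title -> (unit_price, quantity)

    def add(self, title, price, qty=1):
        _, current = self.items.get(title, (price, 0))
        self.items[title] = (price, current + qty)

    def total(self):
        return sum(price * qty for price, qty in self.items.values())

cart = ShoppingCart()
cart.add("Clean Code", 25.0)
cart.add("SQL Basics", 18.5, qty=2)
order_total = cart.total()   # amount passed to the payment step
```

In the web application, an instance like this would live in the signed-in customer's session until checkout.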

