Transcriptomics in Toxicogenomics, Part III: Data Modelling for Risk Assessment

Nanomaterials, 2020, Vol 10 (4), pp. 708
Author(s): Angela Serra, Michele Fratello, Luca Cattelani, Irene Liampa, Georgia Melagraki, ...

Transcriptomics data are relevant to addressing a number of challenges in Toxicogenomics (TGx). After careful planning of exposure conditions and data preprocessing, TGx data can be used in predictive toxicology, where more advanced modelling techniques are applied. The large volume of molecular profiles produced by omics-based technologies allows the development and application of artificial intelligence (AI) methods in TGx. Indeed, publicly available omics datasets are constantly increasing, together with a plethora of methods made available to facilitate their analysis and interpretation and the generation of accurate and stable predictive models. In this review, we present the state of the art of data modelling applied to transcriptomics data in TGx. We show how benchmark dose (BMD) analysis can be applied to TGx data. We review read-across and adverse outcome pathway (AOP) modelling methodologies. We discuss how network-based approaches can be successfully employed to clarify the mechanism of action (MOA) or specific biomarkers of exposure. We also describe the main AI methodologies applied to TGx data to create predictive classification and regression models, and we address current challenges. Finally, we present a short description of deep learning (DL) and data integration methodologies applied in these contexts. Modelling of TGx data represents a valuable tool for more accurate chemical safety assessment. This review is the third part of a three-article series on Transcriptomics in Toxicogenomics.
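As a minimal illustration of the benchmark dose (BMD) idea mentioned above (a sketch, not the authors' pipeline), the snippet below assumes an exponential dose-response model r(d) = a * exp(b * d) has already been fitted to one gene's expression data, and computes the dose at which the response departs from baseline by a chosen benchmark response (BMR). The slope value is purely hypothetical.

```python
import math

def bmd_exponential(b, bmr=0.10):
    """Benchmark dose for the exponential model r(d) = a * exp(b * d):
    the dose at which the response changes by a fraction BMR from baseline,
    i.e. a * exp(b * BMD) = a * (1 + BMR)  =>  BMD = ln(1 + BMR) / b."""
    return math.log(1.0 + bmr) / b

# Hypothetical fitted slope for one gene's dose-response curve
# (illustrative value); BMD at a 10% benchmark response:
print(round(bmd_exponential(0.05), 3))  # 1.906
```

In practice, BMD tools fit several candidate models per gene and report confidence bounds (BMDL/BMDU) rather than a single point estimate; the closed-form step above is only the final inversion of one fitted model.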

2014, Vol 36 (3), pp. 19-25
Author(s): Fiona Reynolds, Carl Westmoreland, Julia Fentem

New informatics capabilities and computational and mathematical modelling techniques, used in combination with highly sensitive molecular biology and mechanistic chemistry approaches, are transforming the way in which we assess the safety of chemicals and products. In recent years, good progress has been made in replacing some of the animal tests required for regulatory purposes with methods using cells and tissues in vitro. Nevertheless, big scientific challenges remain in developing relevant non-animal models able to predict the effects of chemicals which are absorbed systemically. The greatest breakthroughs in non-animal approaches for chemical safety assessment will most likely result from continued multi-disciplinary research investment in predictive (integrative and systems) biology. Some of our current research in this area is described in the present article.


1989, Vol 4 (4), pp. 205-215
Author(s): Daniel T. Lee

The traditional data modelling techniques of decision support systems (DSS) and the modern knowledge representation methodologies of expert systems (ES) are inconsistent with one another. A new unifying model is needed for integrating the two systems into a unified whole. After a brief review of data modelling techniques and knowledge representation methodologies, the unifying model will be described, and integrated systems will be used to exemplify its usefulness.


2020, Vol 30 (Supplement_5)
Author(s): F Madia, A Worth, M Whelan, R Corvi

Abstract The rising rates of cancer incidence and prevalence identified by the WHO are of serious concern. The scientific advances of the past twenty years have helped to describe major properties of cancer, enabling more sophisticated therapies. It has become clear that the management of relevant risk factors can also significantly reduce cancer occurrence worldwide. Public health policy actions cannot be decoupled from environmental policy actions, since exposure to chemicals through air, soil, water and food can contribute to cancer as well as to other chronic diseases. Furthermore, given the increasing global trend of chemical production, including novel compounds, chemical exposure patterns are expected to change, placing high demands on chemical safety assessment and creating potential protection gaps. The safety assessment of carcinogenicity needs to evolve to keep pace with changes in the chemical environment and in cancer epidemiology. The presentation focuses on EC-JRC recommendations and future strategies for carcinogenicity safety assessment. This includes a discussion of how the traditional data streams of regulatory toxicology, together with newly available assessment methods and indicators of public health status based on biomonitoring and clinical data, can inform a more holistic, human-relevant and impactful approach to carcinogenicity assessment and the overall prevention of cancer.


2021, pp. 126438
Author(s): Luana de Morais e Silva, Vinicius M. Alves, Edilma R.B. Dantas, Luciana Scotti, Wilton Silva Lopes, ...

2004, Vol 1 (1), pp. 131-142
Author(s): Ljupčo Todorovski, Sašo Džeroski, Peter Ljubič

Both equation discovery and regression methods aim at inducing models of numerical data. While the equation discovery methods are usually evaluated in terms of comprehensibility of the induced model, the emphasis of the regression methods evaluation is on their predictive accuracy. In this paper, we present Ciper, an efficient method for discovery of polynomial equations and empirically evaluate its predictive performance on standard regression tasks. The evaluation shows that polynomials compare favorably to linear and piecewise regression models, induced by the existing state-of-the-art regression methods, in terms of degree of fit and complexity.
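To give a flavour of the kind of comparison described above (synthetic data, not the Ciper method itself), the following self-contained sketch fits linear and quadratic polynomials by ordinary least squares, solving the normal equations with Gaussian elimination, and compares their degree of fit:

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    of a Vandermonde design matrix; returns coef, coef[i] * x**i."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def sse(xs, ys, coef):
    """Sum of squared errors of the fitted polynomial."""
    pred = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    return sum((p - y) ** 2 for p, y in zip(pred, ys))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x - 2 * x + 1 for x in xs]  # exactly quadratic synthetic data
lin, quad = polyfit(xs, ys, 1), polyfit(xs, ys, 2)
print(sse(xs, ys, lin) > sse(xs, ys, quad))  # True
```

On data with a genuine quadratic trend, the quadratic model attains a near-zero residual while the linear model does not, mirroring the degree-of-fit comparison in the abstract; equation discovery methods like Ciper additionally search over the polynomial structure itself rather than fixing the degree in advance.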


2009, Vol 142 (11), pp. 2501-2509
Author(s): Miia Parviainen, Mathieu Marmion, Miska Luoto, Wilfried Thuiller, Risto K. Heikkinen

2020
Author(s): Thijs Dhollander, Adam Clemente, Mervyn Singh, Frederique Boonstra, Oren Civier, ...

Diffusion MRI has provided the neuroimaging community with a powerful tool to acquire in-vivo data sensitive to microstructural features of white matter, up to 3 orders of magnitude smaller than typical voxel sizes. The key to extracting such valuable information lies in complex modelling techniques, which form the link between the rich diffusion MRI data and various metrics related to the microstructural organisation. Over time, increasingly advanced techniques have been developed, up to the point where some diffusion MRI models can now provide access to properties specific to individual fibre populations in each voxel in the presence of multiple "crossing" fibre pathways. While highly valuable, such fibre-specific information poses unique challenges for typical image processing pipelines and statistical analysis. In this work, we review the "fixel-based analysis" (FBA) framework that implements bespoke solutions to this end, and has recently seen a stark increase in adoption for studies of both typical (healthy) populations as well as a wide range of clinical populations. We describe the main concepts related to fixel-based analyses, as well as the methods and specific steps involved in a state-of-the-art FBA pipeline, with a focus on providing researchers with practical advice on how to interpret results. We also include an overview of the scope of current fixel-based analysis studies (until August 2020), categorised across a broad range of neuroscientific domains, listing key design choices and summarising their main results and conclusions. Finally, we critically discuss several aspects and challenges involved with the fixel-based analysis framework, and outline some directions and future opportunities.


2000, Vol 56 (3), pp. 250-278
Author(s): Kalervo Järvelin, Peter Ingwersen, Timo Niemi

This article presents a novel user-oriented interface for generalised informetric analysis and demonstrates how informetric calculations can easily and declaratively be specified through advanced data modelling techniques. The interface is declarative and at a high level. Therefore it is easy to use, flexible and extensible. It enables end users to perform basic informetric ad hoc calculations easily and often with much less effort than in contemporary online retrieval systems. It also provides several fruitful generalisations of typical informetric measurements like impact factors. These are based on substituting traditional foci of analysis, for instance journals, by other object types, such as authors, organisations or countries. In the interface, bibliographic data are modelled as complex objects (non-first normal form relations) and terminological and citation networks involving transitive relationships are modelled as binary relations for deductive processing. The interface is flexible, because it makes it easy to switch focus between various object types for informetric calculations, e.g. from authors to institutions. Moreover, it is demonstrated that all informetric data can easily be broken down by criteria that foster advanced analysis, e.g. by years or content-bearing attributes. Such modelling allows flexible data aggregation along many dimensions. These salient features emerge from the query interface's general data restructuring and aggregation capabilities combined with transitive processing capabilities. The features are illustrated by means of sample queries and results in the article.
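The idea of substituting the focus of analysis, for instance computing an impact-like measure per author rather than per journal, can be sketched with a plain aggregation. This is a toy stand-in for the declarative interface described above, and the records are entirely hypothetical:

```python
from collections import defaultdict

# Hypothetical bibliographic records: (author, year, citations_received)
records = [
    ("Smith", 2018, 10), ("Smith", 2019, 4),
    ("Jones", 2018, 3), ("Jones", 2019, 9), ("Jones", 2019, 2),
]

# Switch the focus of analysis from journals to authors:
# mean citations per paper, an author-level "impact" measure.
papers = defaultdict(int)
cites = defaultdict(int)
for author, year, c in records:
    papers[author] += 1
    cites[author] += c

impact = {a: cites[a] / papers[a] for a in papers}
print(impact)  # {'Smith': 7.0, 'Jones': 4.666...}
```

Grouping by a different key (year, organisation, country) or adding a second grouping dimension reproduces, in miniature, the flexible breakdown and aggregation the interface provides declaratively.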

