Building and analysing genome-scale metabolic models

2010, Vol 38 (5), pp. 1197-1201
Author(s): David A. Fell, Mark G. Poolman, Albert Gevorgyan

Reconstructing a model of the metabolic network of an organism from its annotated genome sequence would seem, at first sight, to be one of the most straightforward tasks in functional genomics, even if the various data sources required were never designed with this application in mind. The number of genome-scale metabolic models is, however, lagging far behind the number of sequenced genomes and is likely to continue to do so unless the model-building process can be accelerated. Two aspects that could usefully be improved are the ability to find the sources of error in a nascent model rapidly, and the generation of tenable hypotheses concerning solutions that would improve a model. We will illustrate these issues with approaches we have developed in the course of building metabolic models of Streptococcus agalactiae and Arabidopsis thaliana.
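One common way to surface errors in a nascent model of this kind is to scan the stoichiometry for dead-end metabolites, i.e. species that are only ever produced or only ever consumed, a frequent symptom of gaps in a draft reconstruction. The sketch below is a hypothetical toy illustration of that check, not the authors' own tooling; the network, representation, and function names are invented:

```python
# Hypothetical toy network: keys = metabolites, values = the reactions
# touching them. coeff > 0 means the reaction produces the metabolite,
# coeff < 0 means it consumes it.
S = {
    "A": {"R1": 1, "R2": -1},
    "B": {"R2": 1, "R3": -1},
    "C": {"R3": 1},          # C is produced but never consumed: a dead end
}

def dead_end_metabolites(S, reversible=()):
    """Flag metabolites that are only produced or only consumed by
    irreversible reactions -- a frequent source of error in a draft
    genome-scale model."""
    dead = []
    for met, coeffs in S.items():
        produced = any(c > 0 or r in reversible for r, c in coeffs.items())
        consumed = any(c < 0 or r in reversible for r, c in coeffs.items())
        if not (produced and consumed):
            dead.append(met)
    return dead

print(dead_end_metabolites(S))  # → ['C']
```

Reactions consuming a dead-end metabolite can never carry steady-state flux, so each flagged metabolite points at a candidate missing reaction, i.e. a tenable hypothesis for improving the model.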

2015, Vol 31 (3), pp. 475-487
Author(s): John R. Bryant, Patrick Graham

Abstract The article describes a Bayesian approach to deriving population estimates from multiple administrative data sources. Coverage rates play an important role in the approach: identifying anomalies in coverage rates is a key step in the model-building process, and data sources receive more weight within the model if their coverage rates are more consistent. Random variation in population processes and measurement processes is dealt with naturally within the model, and all outputs come with measures of uncertainty. The model is applied to the problem of estimating regional populations in New Zealand. The New Zealand example illustrates the continuing importance of coverage surveys.
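The weighting idea can be caricatured in a few lines: adjust each source's count by its mean coverage rate, then weight sources by the consistency (inverse variance) of their coverage rates. This is a deliberately crude sketch of the principle, not the authors' Bayesian model; the source names and figures are invented:

```python
import statistics

# Invented example: each administrative source reports a count and a
# short history of coverage rates.
sources = {
    "tax":    {"count": 9500, "coverage": [0.94, 0.95, 0.96]},  # consistent
    "health": {"count": 9900, "coverage": [0.99, 0.80, 0.99]},  # erratic
}

def pooled_estimate(sources):
    """Coverage-adjust each count, then pool with weights proportional
    to the consistency (inverse variance) of the coverage rates."""
    num = den = 0.0
    for s in sources.values():
        mean_cov = statistics.mean(s["coverage"])
        weight = 1.0 / statistics.variance(s["coverage"])
        num += weight * s["count"] / mean_cov  # coverage-adjusted count
        den += weight
    return num / den

est = pooled_estimate(sources)
```

The consistent source dominates the pooled estimate (here, est lands near 10 000, the tax source's coverage-adjusted count), mirroring the abstract's point that data sources receive more weight when their coverage rates are more consistent. The full model would of course handle this probabilistically, with uncertainty attached to every output.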


2022
Author(s): Javad Zamani, Sayed-Amir Marashi, Tahmineh Lohrasebi, Mohammad-Ali Malboobi, Esmail Foroozan

Genome-scale metabolic models (GSMMs) have enabled researchers to perform systems-level studies of living organisms. As a constraint-based technique, flux balance analysis (FBA) aids computation of reaction fluxes and prediction of...
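FBA computes fluxes by maximising an objective (typically biomass) subject to steady-state mass balance S·v = 0 and flux bounds; real work uses a linear-programming solver (e.g. via COBRApy), but the constraint structure can be shown with a toy chain network whose mass balance leaves a single degree of freedom. The network below is hypothetical, not from the paper:

```python
# Toy network: R1: -> A (uptake), R2: A -> B, R3: B -> (biomass export).
# Rows = metabolites A, B; columns = reactions R1..R3.
S = [
    [1, -1,  0],   # A: produced by R1, consumed by R2
    [0,  1, -1],   # B: produced by R2, consumed by R3
]
upper = [10.0, 20.0, 20.0]  # flux upper bounds; uptake R1 is limiting

def steady_state(S, v, tol=1e-9):
    """Check the FBA mass-balance constraint S.v = 0."""
    return all(abs(sum(row[j] * v[j] for j in range(len(v)))) < tol
               for row in S)

# In this chain the balance constraints force v1 = v2 = v3, so the FBA
# optimum (maximise v3) is set by the tightest bound along the chain.
v_opt = [min(upper)] * 3
assert steady_state(S, v_opt)
print(v_opt[2])  # maximal biomass flux → 10.0
```

For a genome-scale model with thousands of reactions the same problem is handed to an LP solver; the solution set is generally degenerate, which is why variants such as flux variability analysis are used alongside plain FBA.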


AI Magazine, 2016, Vol 37 (2), pp. 19-32
Author(s): Sasin Janpuangtong, Dylan A. Shell

The infrastructure and tools necessary for large-scale data analytics, formerly the exclusive purview of experts, are increasingly available. Whereas a knowledgeable data-miner or domain expert can rightly be expected to exercise caution when required (for example, around fallacious conclusions supposedly supported by the data), the nonexpert may benefit from some judicious assistance. This article describes an end-to-end learning framework that allows a novice to create models from data easily by helping structure the model building process and capturing extended aspects of domain knowledge. By treating the whole modeling process interactively and exploiting high-level knowledge in the form of an ontology, the framework is able to aid the user in a number of ways, including helping to avoid pitfalls such as data dredging. Prudence must be exercised to avoid these hazards because certain conclusions may only be supported if, for example, there is extra knowledge that gives reason to trust a narrower set of hypotheses. This article adopts the solution of using higher-level knowledge so that this sort of domain knowledge can be applied automatically, selecting relevant input attributes and thereby constraining the hypothesis space. We describe how the framework automatically exploits structured knowledge in an ontology to identify relevant concepts, and how a data extraction component can make use of online data sources to find measurements of those concepts so that their relevance can be evaluated. To validate our approach, models of four different problem domains were built using our implementation of the framework. Prediction error on unseen examples of these models shows that our framework, making use of the ontology, helps to improve model generalization.
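The attribute-selection step can be caricatured as graph reachability over the ontology: keep only candidate input attributes whose concepts lie within a few hops of the target concept. The mini-ontology and attribute names below are invented for illustration; the published framework is considerably richer:

```python
from collections import deque

# Invented mini-ontology: concept -> related concepts.
ontology = {
    "crop_yield": ["rainfall", "soil"],
    "rainfall":   ["humidity"],
    "soil":       ["fertiliser"],
    "football":   ["stadium"],       # unrelated branch
}

def related_concepts(target, max_hops=2):
    """Concepts reachable from the target within max_hops edges --
    a crude stand-in for ontology-guided attribute selection."""
    seen, frontier = {target}, deque([(target, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for nxt in ontology.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return seen

candidates = ["rainfall", "fertiliser", "stadium"]
relevant = [c for c in candidates if c in related_concepts("crop_yield")]
print(relevant)  # → ['rainfall', 'fertiliser']
```

Restricting the learner to the surviving attributes shrinks the hypothesis space before any data is seen, which is exactly the guard against data dredging the abstract describes: spurious but statistically tempting attributes (here, "stadium") never enter the search.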


2015, Vol 1, pp. e31
Author(s): Daniel J.A. Hills, Adrian M. Grütter, Jonathan J. Hudson

An activity fundamental to science is building mathematical models. These models are used to both predict the results of future experiments and gain insight into the structure of the system under study. We present an algorithm that automates the model building process in a scientifically principled way. The algorithm can take observed trajectories from a wide variety of mechanical systems and, without any other prior knowledge or tuning of parameters, predict the future evolution of the system. It does this by applying the principle of least action and searching for the simplest Lagrangian that describes the system’s behaviour. By generating this Lagrangian in a human interpretable form, it can also provide insight into the workings of the system.
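The core idea can be sketched numerically: a candidate Lagrangian is good when the observed trajectory satisfies its Euler-Lagrange equation. The toy below scores candidates for a unit-mass harmonic oscillator by their Euler-Lagrange residual; it is a simplified illustration of the scoring principle under invented parameters, not the published search algorithm:

```python
import math

# Observed trajectory: x(t) = cos(t), i.e. a unit-mass oscillator with
# spring constant k = 1, sampled on a fine grid.
dt = 1e-3
ts = [i * dt for i in range(2000)]
xs = [math.cos(t) for t in ts]

def el_residual(k):
    """Mean |x'' + k*x|, the Euler-Lagrange residual for the candidate
    Lagrangian L = 0.5*v**2 - 0.5*k*x**2, with the acceleration taken
    from central finite differences of the observed positions."""
    total = 0.0
    for i in range(1, len(xs) - 1):
        acc = (xs[i - 1] - 2 * xs[i] + xs[i + 1]) / dt**2
        total += abs(acc + k * xs[i])
    return total / (len(xs) - 2)

# The correct spring constant minimises the residual.
best_k = min([0.5, 1.0, 2.0], key=el_residual)
print(best_k)  # → 1.0
```

The published algorithm additionally searches over the *form* of the Lagrangian, penalising complexity so that the simplest adequate candidate wins, which is what makes the recovered Lagrangian human-interpretable.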


Energies, 2021, Vol 14 (16), pp. 4894
Author(s): Jakub Janus, Jerzy Krawczyk

Research work on the air flow in mine workings frequently utilises computer techniques in the form of numerical simulations. However, it is very often necessary to apply simplifications when building a geometrical model. The assumption of constant model geometry along its entire length is one of the most frequent simplifications. This results in a substantial shortening of the geometrical model building process and a concomitant shortening of the numerical computation time; however, it is not known to what extent such simplifications degrade the accuracy of the simulation results. The paper presents a new methodology that enables precise reproduction of the studied mine gallery and a satisfactory match between simulation results and in-situ measurements. It utilises the processing of data from laser scanning of a mine gallery, simultaneous multi-point measurements of the velocity field at selected gallery cross-sections (unique for mine conditions), and the SAS turbulence model, recently introduced into engineering analyses of flow problems.


2020, Vol 49 (D1), pp. D570-D574
Author(s): Sébastien Moretti, Van Du T Tran, Florence Mehl, Mark Ibberson, Marco Pagni

Abstract MetaNetX/MNXref is a reconciliation of metabolites and biochemical reactions providing cross-links between major public biochemistry and Genome-Scale Metabolic Network (GSMN) databases. The new release brings several improvements with respect to the quality of the reconciliation, with particular attention dedicated to preserving the intrinsic properties of GSMN models. The MetaNetX website (https://www.metanetx.org/) provides access to the full database and online services. A major improvement is the mapping of user-provided GSMNs to MNXref, which now provides diagnostic messages about model content. In addition to the website and flat files, the resource can now be accessed through a SPARQL endpoint (https://rdf.metanetx.org).
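Programmatic access to a SPARQL endpoint like the one mentioned above typically amounts to an HTTP GET with a `query` parameter and a JSON `Accept` header. The sketch below only constructs such a request without sending it; the query is a deliberately generic pattern, since the MNXref RDF schema is not described in this abstract, and the exact query path on the endpoint may differ from the bare URL given:

```python
import urllib.parse
import urllib.request

# Endpoint URL as given in the abstract; the service may expect a
# specific path (assumption -- consult the MetaNetX documentation).
ENDPOINT = "https://rdf.metanetx.org"

# Schema-agnostic SPARQL: valid against any endpoint, returns 5 triples.
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5"

req = urllib.request.Request(
    ENDPOINT + "?" + urllib.parse.urlencode({"query": query}),
    headers={"Accept": "application/sparql-results+json"},
)
# urllib.request.urlopen(req) would execute the query over the network.
print(req.full_url)
```

Real use would substitute MNXref identifiers and predicates into the `WHERE` clause and parse the SPARQL-results JSON from the response body.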


Author(s): Cheng Chen, Xiaobo Zhong, Jun Xiao, Yong Zhu, Jiao Jiang

Safe and efficient operation of a power plant is the system designers' target. The regenerative system improves the Rankine cycle efficiency of a power station. However, it is quite difficult to monitor the regenerative system's performance in an accurate, economical and real-time way at any operating load. There are two main problems. One is that most models based on numerical and statistical approaches cannot be explained by the operating mechanism of the actual process. The other is that most past mechanism models could not monitor system performance accurately in real time. This paper focuses on solving these two problems and finds a better way, called the dominant factor method, to monitor the regenerative system's performance accurately in real time through the analysis of mechanism models and numerical methods. Two important parameters (the characteristic parameter and the dominant factor) and characteristic functions are introduced, and the analysis process and the model building process are described. The mathematical model building is based on a 1000 MW unit's regenerative system, with the characteristic functions built from the specific operating data of the power unit. Combining the general mechanism model and the characteristic functions, the paper builds a regenerative system off-design mathematical model. First, the model's accuracy was proved by computer simulation. Then, the model was used to predict, in real time, the pressure at the piping outlet and the temperatures of the outlet feedwater and heater drain water. The results show that the deviation rate between the theoretical predictions and the actual operating data is less than 0.25% over the whole operating load range. Finally, to test the fault identification ability of the model, real tests were performed in this 1000 MW power plant during actual operation.
Performance changes are identified from the difference between the predicted value and the real-time measured value. The test results show that both gradual and sudden performance changes can be detected easily from the model output. To verify the adaptability of the model, it was applied to another 300 MW unit and operation tests were performed. The results show that the method can also be used for the 300 MW unit's regenerative system and can help the operator recognise a faulty heater. These results prove that the dominant factor method is feasible for performance monitoring of the regenerative system: it can monitor the system and detect faults at any operating load in an accurate, fast and real-time way.
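The monitoring logic described, comparing the off-design model's prediction against real-time measurements and flagging sustained deviations, can be sketched in a few lines. The 0.25% figure is the model accuracy reported in the abstract; the fault threshold, variable names, and numbers below are illustrative assumptions, not values from the paper:

```python
# Illustrative fault threshold (assumed): a deviation well above the
# model's reported <0.25% accuracy band.
FAULT_THRESHOLD = 0.01  # 1% sustained deviation

def deviation_rate(predicted, measured):
    """Relative deviation between model prediction and measurement."""
    return abs(predicted - measured) / predicted

def flag_fault(predicted, measured_series, threshold=FAULT_THRESHOLD):
    """Flag a fault only when every recent sample deviates beyond the
    threshold, so single-sample noise is not reported as a fault."""
    return all(deviation_rate(predicted, m) > threshold
               for m in measured_series)

# Hypothetical heater outlet feedwater temperatures (deg C), model says 200.0:
healthy = [199.8, 200.1, 200.3]   # within normal scatter
faulty  = [195.0, 194.5, 194.8]   # sustained ~2.5% shortfall
print(flag_fault(200.0, healthy), flag_fault(200.0, faulty))  # → False True
```

Requiring the deviation to persist across a window is what lets such a scheme catch both the gradual and the sudden performance changes the abstract mentions, while the model's sub-0.25% accuracy keeps the threshold tight enough to be useful.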

