Inferring Phylogenetic Networks from Gene Order Data

2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Alexey Anatolievich Morozov ◽  
Yuri Pavlovich Galachyants ◽  
Yelena Valentinovna Likhoshway

Existing algorithms allow us to infer phylogenetic networks from sequences (DNA, protein, or binary), sets of trees, and distance matrices, but there are no methods to build them using gene order data as input. Here we describe several methods for building split networks from gene order data, perform simulation studies, and use our methods to analyze and interpret different real gene order datasets. All proposed methods are based on intermediate data, which can be generated from the genome structures under study and used as input for network construction algorithms. Three intermediates are used: a set of jackknife trees, a distance matrix, and a binary encoding. According to the simulations and case studies, the best intermediates are jackknife trees and the distance matrix (when used with the Neighbor-Net algorithm). Binary encoding can also be useful, but only when the methods mentioned above cannot be applied.
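The abstract does not specify how the distance-matrix intermediate is computed; a common gene-order distance that could serve this role is the breakpoint distance between two signed gene orders. The sketch below is an illustration of that general idea, not the authors' published code.

```python
def breakpoint_distance(a, b):
    """Breakpoint distance between two signed gene orders (lists of
    signed integers over the same gene set). An adjacency (x, y) in one
    genome counts as preserved in the other if (x, y) or its reverse
    complement (-y, -x) appears there."""
    adj_b = set()
    for x, y in zip(b, b[1:]):
        adj_b.add((x, y))
        adj_b.add((-y, -x))  # same adjacency read on the opposite strand
    return sum(1 for x, y in zip(a, a[1:]) if (x, y) not in adj_b)
```

Computing this over all genome pairs yields a distance matrix that an algorithm such as Neighbor-Net can then turn into a split network.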

2020 ◽  
Author(s):  
Gustavo Cainelli ◽  
Max Feldman ◽  
Tiago Rodrigo Cruz ◽  
Ivan Muller ◽  
Carlos Eduardo Pereira

The use of industrial wireless networks has been growing continuously, and they have become an alternative to wired networks. One of the main elements of an industrial wireless network is the network manager, the component responsible for tasks related to network construction and maintenance. This work presents the development of a network manager that is compatible with the WirelessHART protocol yet customizable, so that it can be modified in order to carry out studies with the protocol. Case studies are presented in which the developed tool was used to investigate communication scheduling, adaptive channel mapping, and fast data collection, demonstrating the effectiveness of the proposed manager.


2019 ◽  
Vol 29 (8) ◽  
pp. 2360-2389
Author(s):  
Jianping Yang ◽  
Pei-Fen Kuan ◽  
Jialiang Li

We propose a non-monotone transformation of biomarkers in order to improve diagnostic and screening accuracy. The proposed quadratic transformation only involves modeling the distribution means and variances of the biomarkers and is therefore easy to implement in practice. Mathematical justification was rigorously established to support the validity of the proposed transformation. We conducted extensive simulation studies to assess the performance of the proposed method and compared the new method with traditional methods. Case studies on real biomedical and epigenetics data are provided to illustrate the proposed transformation. In particular, the proposed method improved the AUC values for a large number of markers in a DNA methylation study and consequently led to the identification of a greater number of important biomarkers and biologically meaningful genetic pathways.
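To illustrate why a quadratic, non-monotone transformation can help: when cases and controls differ mainly in variance rather than mean, any monotone rule fails, while a quadratic function of the standardized marker (proportional to the normal log-likelihood ratio) separates the groups. This is a toy sketch under normality assumptions, not the paper's exact estimator.

```python
import numpy as np

def quadratic_transform(x, m0, s0, m1, s1):
    # Quadratic score proportional to the log-likelihood ratio of
    # N(m1, s1) (cases) versus N(m0, s0) (controls); non-monotone in x
    # whenever the variances differ.
    return ((x - m0) / s0) ** 2 - ((x - m1) / s1) ** 2

def auc(cases, controls):
    # Mann-Whitney estimate of the area under the ROC curve.
    c = np.asarray(cases)[:, None]
    k = np.asarray(controls)[None, :]
    return (c > k).mean() + 0.5 * (c == k).mean()
```

With controls ~ N(0, 1) and cases ~ N(0, 3), the raw marker has AUC near 0.5, while the transformed marker reaches roughly 0.8.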


PLoS ONE ◽  
2017 ◽  
Vol 12 (4) ◽  
pp. e0175876 ◽  
Author(s):  
Martin S. Zand ◽  
Melissa Trayhan ◽  
Samir A. Farooq ◽  
Christopher Fucile ◽  
Gourab Ghoshal ◽  
...  

2008 ◽  
Vol 11 (3) ◽  
pp. 279-287 ◽  
Author(s):  
Janneke Kloosterman ◽  
Martine I Bakker ◽  
Nynke de Jong ◽  
Marga C Ocké

Abstract
Objective: To create a general framework for the simulation of intakes from mandatory or voluntary fortification, which will make outcomes of simulation studies more comparable and give insight into uncertainties.
Design: A general framework was developed based on methods used in already published case studies of mandatory fortification. The framework was extended to be suitable for the simulation of voluntary fortification. Case studies of folic acid fortification were used to illustrate the general framework.
Results: The developed framework consists of six steps: definition of the fortification strategy (step 1), identification of potential carrier products (step 2), definition of fortification levels or ranges (step 3), creation of virtual food/supplement composition data (step 4), collection of food/supplement consumption data (step 5), and calculation of the intake of the functional ingredient from functional foods, other foods, and dietary supplements during the simulation, resulting in total habitual intake distributions (step 6).
Conclusions: Simulation of both mandatory and voluntary folic acid fortification in The Netherlands showed that the general framework is applicable. Even with incomplete data or data from different sources, the (habitual) intake distributions can be estimated using assumptions, statistical procedures, or probabilistic modelling approaches. It is important that the simulation procedure is described well, so that uncertainties and knowledge gaps to be filled are made explicit.
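Steps 4 to 6 of the framework amount to a Monte Carlo combination of fortification levels with consumption data. The sketch below is a minimal, hypothetical illustration of that combination (the food names, level ranges, and supplement handling are invented for the example, not taken from the published framework).

```python
import random

def simulate_intake(consumption_g, fort_ranges, supplement_prob,
                    supplement_dose, n=10000, seed=0):
    """Monte Carlo sketch of steps 4-6: draw a fortification level for
    each carrier food (per 100 g) from its defined range, combine with
    daily consumption, and optionally add a supplement dose.
    Returns a list of simulated total daily intakes."""
    rng = random.Random(seed)
    intakes = []
    for _ in range(n):
        total = 0.0
        for food, grams in consumption_g.items():
            lo, hi = fort_ranges[food]            # step 3: level range
            total += grams / 100 * rng.uniform(lo, hi)  # step 4 + 6
        if rng.random() < supplement_prob:        # step 5: supplement users
            total += supplement_dose
        intakes.append(total)
    return intakes
```

Replacing the fixed inputs with distributions drawn from survey data would give the habitual-intake distributions the framework describes.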


2020 ◽  
Author(s):  
Hadi Poormohammadi ◽  
Mohsen Sardari Zarchi

Abstract
Phylogenetic network construction is one of the most important challenges in phylogenetics. These networks can represent complex non-treelike events such as gene flow, horizontal gene transfer, recombination, or hybridization. Among phylogenetic networks, rooted structures are commonly used to represent the evolutionary history of a set of species explicitly. Triplets are a well-known input for constructing rooted networks, and obtaining an optimal rooted network that contains all given triplets is the main problem in network construction. The optimality criteria include minimizing the level and the number of reticulation nodes. This problem is known to be NP-hard. In this research, a new algorithm called NetCombin is introduced to construct an optimal network consistent with the input triplets. The innovation of this algorithm lies in its binarization and expanding processes. The binarization process uses a novel measure to construct a binary rooted tree T consistent with the maximum number of input triplets. T is then expanded by adding a minimum number of edges to obtain the final network with the minimum number of reticulation nodes. To evaluate the proposed algorithm, NetCombin is compared with four state-of-the-art algorithms: RPNCH, NCHB, TripNet, and SIMPLISTIC. The experimental results on real data indicate that, considering the trade-off between speed and precision, NetCombin outperforms the others.
Author summary
Hadi Poormohammadi received his PhD in Mathematics (Applied Combinatorics) from Shahid Beheshti University, Tehran, Iran, in 2013. He is now an assistant professor at the Faculty of Computer Engineering, Meybod University. His research interests include combinatorics, graph theory, and bioinformatics. Mohsen Sardari Zarchi received his PhD in Computer Engineering (Artificial Intelligence) from the University of Isfahan in 2015. He is now an assistant professor at the Faculty of Computer Engineering, Meybod University. His research interests include deep learning, image processing, artificial intelligence, and bioinformatics.
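A building block behind any triplet-based method is checking whether a rooted triplet xy|z (x and y more closely related to each other than to z) is consistent with a rooted tree: it holds exactly when lca(x, y) lies strictly below lca(x, z). The sketch below illustrates that standard check on a parent-pointer tree; it is not the NetCombin binarization measure itself.

```python
def ancestors(parent, v):
    # Path from v up to the root, following parent pointers.
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca(parent, a, b):
    # First common vertex on the two root-ward paths.
    anc_a = set(ancestors(parent, a))
    for v in ancestors(parent, b):
        if v in anc_a:
            return v
    return None

def consistent(parent, x, y, z):
    # Triplet xy|z holds iff lca(x, y) is a proper descendant of lca(x, z).
    l_xy, l_xz = lca(parent, x, y), lca(parent, x, z)
    return l_xy != l_xz and l_xz in ancestors(parent, l_xy)
```

Counting how many input triplets a candidate tree satisfies, using a check like this, is the kind of measure a binarization step can maximize.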


2014 ◽  
Vol 12 (05) ◽  
pp. 1450024 ◽  
Author(s):  
Matthieu Willems ◽  
Nadia Tahiri ◽  
Vladimir Makarenkov

Several algorithms and software packages have been developed for inferring phylogenetic trees. However, some biological phenomena, such as hybridization, recombination, or horizontal gene transfer, cannot be represented by a tree topology; phylogenetic networks are needed to adequately represent these important evolutionary mechanisms. In this article, we present a new efficient heuristic algorithm for inferring hybridization networks from evolutionary distance matrices between species. The well-known Neighbor-Joining concept and the least-squares criterion are used for building networks. At each step of the algorithm, before joining two given nodes, we check whether a hybridization event could be related to one of them or to both. The proposed algorithm finds the exact tree solution when the considered distance matrix is a tree metric (i.e., it is representable by a unique phylogenetic tree). It also provides very good hybrid recovery rates for large trees (with 32 and 64 leaves in our simulations) for both distance and sequence types of data. The results yielded by the new algorithm for real and simulated datasets are illustrated and discussed in detail.
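The Neighbor-Joining concept the algorithm builds on selects, at each step, the pair of nodes minimizing the classical Q criterion. The sketch below shows only that selection step of standard NJ, not the hybridization check that the article adds around it.

```python
def nj_pick_pair(D, labels):
    """One Neighbor-Joining selection step: return the pair of labels
    minimizing Q(i, j) = (n - 2) * D[i][j] - r_i - r_j, where r_i is the
    row sum of the distance matrix D (a list of lists)."""
    n = len(labels)
    r = [sum(D[i]) for i in range(n)]
    best, pair = None, None
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * D[i][j] - r[i] - r[j]
            if best is None or q < best:
                best, pair = q, (labels[i], labels[j])
    return pair
```

On a tree metric this step recovers true cherries, which is why the full algorithm returns the exact tree when no hybridization is present.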


2006 ◽  
Vol 04 (04) ◽  
pp. 807-832 ◽  
Author(s):  
HO-LEUNG CHAN ◽  
JESPER JANSSON ◽  
TAK-WAH LAM ◽  
SIU-MING YIU

Given a distance matrix M that specifies the pairwise evolutionary distances between n species, the phylogenetic tree reconstruction problem asks for an edge-weighted phylogenetic tree that satisfies M, if one exists. We study some extensions of this problem to rooted phylogenetic networks. Our main result is an O(n² log n)-time algorithm for determining whether there is an ultrametric galled network that satisfies M, and if so, constructing one. In fact, if such an ultrametric galled network exists, our algorithm is guaranteed to construct one containing the minimum possible number of nodes with more than one parent (hybrid nodes). We also prove that finding a largest possible submatrix M′ of M such that there exists an ultrametric galled network that satisfies M′ is NP-hard. Furthermore, we show that given an incomplete distance matrix (i.e. where some matrix entries are missing), it is also NP-hard to determine whether there exists an ultrametric galled network which satisfies it.
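For context, a matrix is satisfied by an ultrametric *tree* exactly when it obeys the three-point condition (in every triple of distances, the two largest are equal); the galled-network question studied here is what to do when this condition fails and hybrid nodes may repair it. The sketch below checks only the classical tree-level condition, not the network algorithm of the paper.

```python
from itertools import combinations

def is_ultrametric(D):
    """Three-point condition for an ultrametric: for every triple of
    taxa, the two largest pairwise distances must be equal. D is a
    symmetric matrix given as a list of lists."""
    n = len(D)
    for i, j, k in combinations(range(n), 3):
        a, b, c = sorted([D[i][j], D[i][k], D[j][k]])
        if b != c:  # the two largest distances differ: not ultrametric
            return False
    return True
```

A matrix failing this check cannot be realized by any ultrametric tree; the O(n² log n) algorithm decides whether a galled network with hybrid nodes can realize it instead.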


2018 ◽  
Author(s):  
Satoshi Usami ◽  
Naoya Todo ◽  
Kou Murayama

Longitudinal designs provide a strong inferential basis for uncovering reciprocal effects or causality between variables. For this analytic purpose, the cross-lagged panel model (CLPM) has been widely used in medical research, but its use has recently been criticized in the methodological literature because parameter estimates in the CLPM conflate between-person and within-person processes. The aim of this study is to present some alternative models to the CLPM that can be used to examine reciprocal effects, and to illustrate the potential consequences of ignoring the issue. A literature search, case studies, and simulation studies were used for this purpose. We examined more than 300 medical papers published since 2009 that applied cross-lagged longitudinal models, finding that in all studies only a single model (typically the CLPM) was fitted and potential alternative models were not considered when testing reciprocal effects. In 49% of the studies, only two time points were used, which makes it impossible to test such alternative models. The case studies and simulation studies showed that the CLPM often has worse model fit and markedly different estimates of cross-lagged parameters than alternative models, suggesting that research relying on the CLPM alone may draw erroneous conclusions regarding the presence, predominance, and sign of reciprocal effects as well as about causality.
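The conflation the authors describe can be demonstrated in a few lines: if a stable between-person trait drives both variables and the true within-person cross-lagged effect is zero, a naive CLPM-style regression still recovers a nonzero cross-lag. This is a toy illustration of the critique, not the authors' simulation design.

```python
import numpy as np

def clpm_conflation(n=5000, seed=1):
    """Simulate two waves of x and y driven only by a shared stable
    trait (no within-person cross-lagged effect), then estimate the
    cross-lag of x on later y by least squares. The true within-person
    effect is 0, yet the estimate is systematically positive."""
    rng = np.random.default_rng(seed)
    trait = rng.normal(size=n)                # stable between-person factor
    x0 = trait + rng.normal(size=n)
    y0 = trait + rng.normal(size=n)
    y1 = trait + rng.normal(size=n)           # y at wave 2: no effect of x0
    X = np.column_stack([x0, y0, np.ones(n)]) # CLPM-style predictors
    beta = np.linalg.lstsq(X, y1, rcond=None)[0]
    return beta[0]                            # estimated cross-lag of x0
```

With unit trait and noise variances the population value of this spurious cross-lag works out to 1/3; models that separate the between-person component (e.g. a random-intercept CLPM) remove it.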


2017 ◽  
Vol 24 (s1) ◽  
pp. 46-52 ◽  
Author(s):  
Mariusz Deja ◽  
Michał Dobrzyński ◽  
Mieczysław S. Siemiątkowski ◽  
Aleksandra Wiśniewska

Abstract
The focus of the paper is on quayside transport and storage yard operations in container terminals. Relevant algorithms have been applied and a simulation model adopted. The evaluative criteria chosen for the model were the total time of ship unloading and the truck utilization level. Recommendations for unloading in berth and yard areas were analysed in three different case studies. Results of the simulations and deterministic model-based analyses are included.
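The interaction between the two criteria (total unloading time and truck utilization) can be seen in a minimal discrete-event sketch: one quay crane loads containers onto a pool of trucks that shuttle to the yard. All parameters here are hypothetical and the model is far simpler than the paper's.

```python
import heapq

def total_unload_time(n_containers, crane_cycle, n_trucks, round_trip):
    """Tiny discrete-event sketch: a single crane needs crane_cycle time
    units per container and must wait for a free truck; a loaded truck
    returns from the yard after round_trip time units. Returns the time
    at which the last container leaves the quay crane."""
    free_at = [0.0] * n_trucks          # min-heap of truck return times
    heapq.heapify(free_at)
    t = 0.0                             # crane clock
    for _ in range(n_containers):
        truck = heapq.heappop(free_at)
        t = max(t, truck) + crane_cycle # wait for crane AND a truck
        heapq.heappush(free_at, t + round_trip)
    return t
```

When the truck pool is large enough, total time is crane-bound (n_containers * crane_cycle); with too few trucks, idle crane time accumulates, which is exactly the trade-off the evaluative criteria capture.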

