On Finding Two Posets that Cover Given Linear Orders

Algorithms ◽  
2019 ◽  
Vol 12 (10) ◽  
pp. 219
Author(s):  
Ivy Ordanel ◽  
Proceso Fernandez ◽  
Henry Adorna

The Poset Cover Problem is an optimization problem where the goal is to determine a minimum set of posets that covers a given set of linear orders. This problem is relevant in the field of data mining, specifically in determining directed networks or models that explain the ordering of objects in a large sequential dataset. It is already known that the decision version of the problem is NP-hard, while the variation where the goal is to determine only a single poset that covers the input is in P. In this study, we investigate the variation, which we call the 2-Poset Cover Problem, where the goal is to determine two posets, if they exist, that cover the given linear orders. We derive properties of posets, which lead to an exact solution for the 2-Poset Cover Problem. Although the algorithm runs in exponential time, it is still significantly faster than a brute-force solution. Moreover, we show that when the posets being considered are tree-posets, the running time of the algorithm becomes polynomial, which proves that this more restricted variation, which we call the 2-Tree-Poset Cover Problem, is also in P.
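Not the paper's algorithm, but the brute-force baseline the abstract compares against can be sketched as follows: enumerate every permutation consistent with a candidate poset's relations and compare the result to the input set of linear orders (the encoding below, pairs `(a, b)` meaning a < b, is an illustrative choice):

```python
from itertools import permutations

def linear_extensions(elements, relations):
    """All total orders of `elements` consistent with the partial order
    given by `relations` (a set of (a, b) pairs meaning a < b)."""
    exts = []
    for perm in permutations(elements):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            exts.append(perm)
    return exts

def covers(poset, orders):
    """True iff the poset's set of linear extensions is exactly `orders`."""
    elements = sorted({x for order in orders for x in order})
    return set(linear_extensions(elements, poset)) == set(orders)

# The poset over {1, 2, 3} with the single relation 1 < 2 has exactly
# three linear extensions: (1,2,3), (1,3,2), (3,1,2).
print(covers({(1, 2)}, [(1, 2, 3), (1, 3, 2), (3, 1, 2)]))  # True
```

Enumerating all n! permutations per candidate poset is what makes the brute-force approach infeasible; the paper's contribution is avoiding exactly this blow-up.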

2007 ◽  
Vol 7 (1) ◽  
pp. 25-47 ◽  
Author(s):  
I.P. Gavrilyuk ◽  
M. Hermann ◽  
M.V. Kutniv ◽  
V.L. Makarov

The scalar boundary value problem (BVP) for a nonlinear second order differential equation on the semiaxis is considered. Under some natural assumptions it is shown that on an arbitrary finite grid there exists a unique three-point exact difference scheme (EDS), i.e., a difference scheme whose solution coincides with the projection of the exact solution of the given differential equation onto the underlying grid. A constructive method is proposed to derive from the EDS a so-called truncated difference scheme (n-TDS) of rank n, where n is a freely selectable natural number. The n-TDS is the basis for a new adaptive algorithm which has all the advantages known from modern IVP solvers. Numerical examples are given which illustrate the theorems presented in the paper and demonstrate the reliability of the new algorithm.


Author(s):  
R. Jamuna

CpG islands (CGIs) play a vital role in genome analysis as genomic markers. Identification of CpG pairs has contributed not only to the prediction of promoters but also to the understanding of the epigenetic causes of cancer. In the human genome [1], wherever the dinucleotide CG occurs, the C nucleotide (cytosine) undergoes chemical modification, and there is a relatively high probability that this modification mutates C into T. For biologically important reasons, this mutation process is suppressed in short stretches of the genome, such as 'start' regions, where CpG dinucleotides are more predominant than elsewhere [2]. Such regions are called CpG islands. DNA methylation is an effective means by which gene expression is silenced. In normal cells, DNA methylation functions to prevent the expression of imprinted and inactive X-chromosome genes. In cancerous cells, DNA methylation inactivates tumor-suppressor genes as well as DNA-repair genes, and can disrupt cell-cycle regulation. Most current methods for identifying CGIs suffer from various limitations and involve a great deal of human intervention. This paper gives an easy searching technique based on data mining with Markov chains in genes. A Markov chain model is applied to study the probability of occurrence of the C-G pair in a given gene sequence. Maximum-likelihood estimators for the transition probabilities of each model are developed, and the log-odds ratio calculated from them estimates the presence or absence of CpG islands in the given gene, which yields many facts useful for cancer detection in the human genome.
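The Markov-chain scoring described here can be sketched as follows: fit transition probabilities by maximum likelihood on example island-like ("+") and background-like ("−") sequences, then score a query sequence by its log-odds ratio. The toy training sequences and the pseudocount are illustrative assumptions, not the paper's data:

```python
import math

STATES = "ACGT"

def mle_transitions(seqs, pseudo=1.0):
    """Maximum-likelihood transition probabilities with a small pseudocount:
    P(b | a) = count(a -> b) / count(a -> anything)."""
    counts = {a: {b: pseudo for b in STATES} for a in STATES}
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: counts[a][b] / sum(counts[a].values()) for b in STATES}
            for a in STATES}

# Illustrative models: inside an island ("+") the C->G transition is far
# more likely than in the background ("-") model.
P_PLUS = mle_transitions(["CGCGCGGC", "GCGCGCAT"])
P_MINUS = mle_transitions(["ATATTACA", "CATGATTA"])

def log_odds(seq, p_plus=P_PLUS, p_minus=P_MINUS):
    """Sum of log2 ratios over adjacent pairs; a positive score favours
    the CpG-island model over the background model."""
    return sum(math.log2(p_plus[a][b] / p_minus[a][b])
               for a, b in zip(seq, seq[1:]))
```

A CG-rich query such as `"CGCG"` scores positive under these toy models, while an AT-rich query such as `"TATA"` scores negative, which is the decision rule the abstract describes.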


Author(s):  
Shu-Qiang Wang ◽  
Ji-Huan He

An extremely simple and elementary, but rigorous derivation of temperature distribution of a reaction-diffusion process is given using the variational iteration method. In this method, a trial function (an initial solution) is chosen with some unknown parameter, which is identified after a few iterations according to the given boundary conditions. Comparison with the exact solution shows that the method is very effective and convenient.
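For reference, He's variational iteration method builds successive approximations through a correction functional; the operator splitting below is the standard general form, not the paper's specific reaction-diffusion equation:

```latex
u_{n+1}(t) = u_n(t)
  + \int_0^t \lambda(s)\,\bigl( L u_n(s) + N \tilde{u}_n(s) - g(s) \bigr)\, \mathrm{d}s ,
```

where $L$ and $N$ are the linear and nonlinear parts of the equation, $g$ is the source term, $\tilde{u}_n$ denotes a restricted variation, and the Lagrange multiplier $\lambda$ is identified optimally via variational theory. The unknown parameter in the trial function $u_0$ is then fixed by imposing the given boundary conditions, as the abstract describes.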


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257958
Author(s):  
Miguel Navascués ◽  
Costantino Budroni ◽  
Yelena Guryanova

In the context of epidemiology, policies for disease control are often devised through a mixture of intuition and brute-force, whereby the set of logically conceivable policies is narrowed down to a small family described by a few parameters, following which linearization or grid search is used to identify the optimal policy within the set. This scheme runs the risk of leaving out more complex (and perhaps counter-intuitive) policies for disease control that could tackle the disease more efficiently. In this article, we use techniques from convex optimization theory and machine learning to conduct optimizations over disease policies described by hundreds of parameters. In contrast to past approaches for policy optimization based on control theory, our framework can deal with arbitrary uncertainties on the initial conditions and model parameters controlling the spread of the disease, and stochastic models. In addition, our methods allow for optimization over policies which remain constant over weekly periods, specified by either continuous or discrete (e.g.: lockdown on/off) government measures. We illustrate our approach by minimizing the total time required to eradicate COVID-19 within the Susceptible-Exposed-Infected-Recovered (SEIR) model proposed by Kissler et al. (March, 2020).
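A minimal sketch of the setting being optimized: discrete-time SEIR dynamics under a weekly, piecewise-constant transmission policy. All rates, the horizon, and the policy encoding are illustrative assumptions, and the optimization itself (the paper's actual contribution) is omitted:

```python
def seir_step(s, e, i, r, beta, sigma=0.2, gamma=0.1):
    """One forward-Euler day of SEIR dynamics (population fractions)."""
    new_e, new_i, new_r = beta * s * i, sigma * e, gamma * i
    return s - new_e, e + new_e - new_i, i + new_i - new_r, r + new_r

def simulate(policy, days=140, beta0=0.5):
    """`policy[w]` in [0, 1] scales transmission during week w
    (0 = hard lockdown); `policy` must cover days // 7 weeks.
    Returns the final (S, E, I, R) fractions."""
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    for day in range(days):
        s, e, i, r = seir_step(s, e, i, r, beta0 * policy[day // 7])
    return s, e, i, r
```

An optimizer of the kind the article describes would search over the policy vector; for instance, a constant strong lockdown (`[0.2] * 20`) ends with a far smaller recovered (attack-rate) fraction than no intervention (`[1.0] * 20`).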


2015 ◽  
Vol 14 (06) ◽  
pp. 1215-1242 ◽  
Author(s):  
Chun-Hao Chen ◽  
Tzung-Pei Hong ◽  
Yeong-Chyi Lee ◽  
Vincent S. Tseng

Since transactions may contain quantitative values, many approaches have been proposed to derive membership functions for mining fuzzy association rules using genetic algorithms (GAs), a process known as genetic-fuzzy data mining. However, existing approaches assume that the number of linguistic terms is predefined. This study therefore proposes a genetic-fuzzy mining approach for extracting an appropriate number of linguistic terms, and their membership functions, for the given items in fuzzy data mining. The proposed algorithm adjusts membership functions using GAs and then uses them to fuzzify the quantitative transactions. Each individual in the population represents a possible set of membership functions for the items and is divided into two parts: control genes (CGs) and parametric genes (PGs). CGs are encoded as binary strings and determine whether membership functions are active. Each set of membership functions for an item is encoded as PGs with a real-number schema. In addition, seven fitness functions are proposed, each of which evaluates the goodness of the obtained membership functions and serves as the evolutionary criterion in the GA. After the GA process terminates, a better set of association rules with a suitable set of membership functions is obtained. Experiments show the effectiveness of the proposed approach.
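The two-part chromosome described above can be sketched as follows; the triangular membership shape, the value domain, and the candidate term count are illustrative assumptions, not the authors' exact encoding:

```python
import random

def make_individual(max_terms=5, domain=(0.0, 10.0)):
    """Chromosome = control genes (one activation bit per candidate
    linguistic term) + parametric genes (centre, half-width per term)."""
    cg = [random.randint(0, 1) for _ in range(max_terms)]
    pg = [(random.uniform(*domain), random.uniform(0.5, 3.0))
          for _ in range(max_terms)]
    return cg, pg

def active_terms(cg, pg):
    """Membership functions whose control gene is switched on; their
    count is the number of linguistic terms this individual proposes."""
    return [p for bit, p in zip(cg, pg) if bit]

def triangular(x, centre, half_width):
    """Degree to which x belongs to an isosceles triangular fuzzy set."""
    return max(0.0, 1.0 - abs(x - centre) / half_width)
```

Flipping a control gene adds or removes a linguistic term, while mutating parametric genes reshapes a term, which is how a single GA run can search both the number and the form of the membership functions.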


2011 ◽  
pp. 233-255
Author(s):  
Stefano De Luca ◽  
Enrico Memo

Health-care expenses are an important portion of every country's overall expenditure, so it is very important to determine whether the care given is the right care. This work describes a methodology, Health Discoverer, and the resulting software, aimed at disease management and at measuring the appropriateness of care; in particular, it covers the data mining techniques used to verify compliance with Clinical Practice Guidelines (CPGs) and to discover new, better guidelines. The work is based on quality records and on episode parsing using ontologies and Hidden Markov Models.


Author(s):  
Yanbo J. Wang ◽  
Xinwei Zheng ◽  
Frans Coenen

An association rule (AR) is a common type of mined knowledge in data mining that describes an implicative co-occurring relationship between two sets of binary-valued transaction-database attributes, expressed in the form of an X ⇒ Y rule. A variation of ARs is weighted association rules (WARs), which address the weighting issue in ARs. In this chapter, the authors introduce the concept of a "one-sum" WAR and name such WARs allocating patterns (ALPs). An algorithm is proposed to extract hidden and interesting ALPs from data. The authors further indicate that ALPs can be applied in portfolio management: first, by modelling a collection of investment portfolios as a one-sum weighted transaction-database that contains hidden ALPs; second, by showing that ALPs mined from the given portfolio data can be applied to guide future investment activities. The experimental results show good performance, which demonstrates the effectiveness of using ALPs in the proposed application.
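A minimal sketch of the one-sum idea, assuming portfolios are encoded as asset-to-allocation mappings whose weights sum to one; the weighted-support measure below is an illustrative guess, not necessarily the authors' definition:

```python
# Each transaction (portfolio) maps assets to allocations summing to 1.
portfolios = [
    {"bonds": 0.5, "stocks": 0.3, "cash": 0.2},
    {"bonds": 0.6, "stocks": 0.4},
    {"stocks": 0.7, "cash": 0.3},
]

def is_one_sum(db, tol=1e-9):
    """One-sum property: every transaction's weights sum to exactly 1."""
    return all(abs(sum(t.values()) - 1.0) < tol for t in db)

def weighted_support(itemset, db):
    """Illustrative weighted support: total weight of `itemset` in the
    transactions containing all of its items, averaged over the database."""
    weights = [sum(t[i] for i in itemset)
               for t in db if set(itemset) <= set(t)]
    return sum(weights) / len(db)
```

Under this encoding, the pattern {bonds, stocks} carries combined weight 0.8 and 1.0 in the two portfolios that contain both assets, for a weighted support of 0.6 over the three-portfolio database.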


Author(s):  
Endre Boros ◽  
Peter L. Hammer ◽  
Toshihide Ibaraki

The logical analysis of data (LAD) is a methodology aimed at extracting or discovering knowledge from data in logical form. The first paper in this area was published as Crama, Hammer, & Ibaraki (1988) and precedes most of the data mining papers appearing in the 1990s. Its primary target is a set of binary data belonging to two classes, for which a Boolean function that classifies the data into the two classes is built. In other words, the extracted knowledge is embodied as a Boolean function, which is then used to classify unknown data. As Boolean functions that classify the given data into two classes are not unique, various methodologies have been investigated in LAD to obtain compact and meaningful functions. As will be mentioned later, numerical and categorical data can also be handled, and more than two classes can be represented by combining more than one Boolean function.
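A minimal sketch of LAD's core building block, assuming patterns are conjunctions of fixed attribute values: a (pure) positive pattern is one that covers at least one positive observation and no negative ones, and a classifying Boolean function is assembled from such patterns:

```python
def matches(pattern, x):
    """`pattern` maps attribute index -> required bit; `x` is a bit tuple.
    The pattern is a conjunction of literals over those attributes."""
    return all(x[i] == b for i, b in pattern.items())

def is_positive_pattern(pattern, positives, negatives):
    """Pure positive pattern: covers some positive data, no negative data."""
    return (any(matches(pattern, x) for x in positives)
            and not any(matches(pattern, x) for x in negatives))

positives = [(1, 0, 1), (1, 1, 1)]
negatives = [(0, 0, 1), (0, 1, 0)]

# "attribute 0 is 1" separates the classes here; "attribute 2 is 1" does not,
# since a negative observation also satisfies it.
print(is_positive_pattern({0: 1}, positives, negatives))  # True
print(is_positive_pattern({2: 1}, positives, negatives))  # False
```

Since many such patterns (and hence many Boolean functions) fit the same data, LAD's search for compact, meaningful functions amounts to choosing among pattern collections like these.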


2014 ◽  
Vol 721 ◽  
pp. 543-546 ◽  
Author(s):  
Dong Juan Gu ◽  
Lei Xia

The Apriori algorithm is the classical algorithm for mining association rules in data mining. Because the Apriori algorithm needs to scan the database many times, it runs slowly. In order to improve the running efficiency, this paper improves the Apriori algorithm based on an analysis of the original. The improved idea is to transform the transaction database into a corresponding 0-1 matrix, in which the inner product of each item vector with a subsequent vector yields the support of the corresponding itemset. By comparing supports with the given minimum support, the rows and columns whose support falls below the threshold are deleted, which reduces the size of the matrix and improves the running speed. Because the improved algorithm needs to scan the database only once, it runs considerably faster. The experiments also show that the improved algorithm is efficient and feasible.
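The matrix formulation can be sketched as follows, with an illustrative 0-1 transaction matrix: column inner products give itemset supports, and rows and columns falling below the minimum support are pruned:

```python
import numpy as np

# Rows = transactions, columns = items, entries 0/1 (illustrative data).
M = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 1],
])

def prune(matrix, min_support):
    """Support of item j is its column sum. Drop columns below
    `min_support`, then drop rows left with no items."""
    matrix = matrix[:, matrix.sum(axis=0) >= min_support]
    return matrix[matrix.sum(axis=1) > 0]

def pair_support(matrix, i, j):
    """Number of transactions containing both items i and j, computed as
    the inner product of the two item columns."""
    return int(matrix[:, i] @ matrix[:, j])
```

Here items 0 and 1 co-occur in three transactions (`pair_support(M, 0, 1) == 3`), and pruning at minimum support 3 shrinks the matrix to its two frequent columns, exactly the size reduction the abstract describes.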


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Da-Wei Jin ◽  
Li-Ning Xing

Multiple-satellite mission planning is a complex combinatorial optimization problem. A knowledge-based simulated annealing algorithm is proposed for multiple-satellite mission planning problems. The experimental results suggest that the proposed algorithm is effective on the given problem. The knowledge-based simulated annealing method provides a useful reference for the improvement of existing optimization approaches.
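The abstract gives no algorithmic details, but a generic simulated-annealing skeleton looks like this; a knowledge-based variant would bias the `neighbour` move with domain heuristics. The cooling schedule and the toy objective in the usage note are illustrative:

```python
import math
import random

def simulated_annealing(cost, neighbour, state, t0=10.0, cooling=0.95,
                        steps=2000):
    """Minimize `cost` by accepting worse neighbours with probability
    exp(-delta / t) under a geometrically cooling temperature t."""
    best = current = state
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best
```

For example, minimizing the toy objective `(x - 3) ** 2` with Gaussian moves drives the state toward 3; in a mission-planning setting, `state` would be a schedule and `neighbour` would apply knowledge-guided swaps or reassignments of satellite tasks.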

