Comparing node degrees in probabilistic networks

2019 ◽  
Vol 7 (5) ◽  
pp. 749-763 ◽  
Author(s):  
Amin Kaveh ◽  
Matteo Magnani ◽  
Christian Rohner

Abstract: Degree is a fundamental property of nodes in networks. However, computing the degree distribution of nodes in probabilistic networks is an expensive task for large networks. To overcome this difficulty, the expected degree is commonly used in the literature. In this article, however, we show that in some cases the expected degree does not allow us to evaluate the probability of two nodes having the same degree, or of one node having a higher degree than another. This suggests that expected degree in probabilistic networks does not play exactly the same role as degree in deterministic networks. For each node, we define a reference node with the same expected degree but the least possible variance, corresponding to the least uncertain degree distribution. We then show how the probability of a node's degree being higher than or equal to the degree of its reference node can be approximated using the variance and skewness of the degree distribution in addition to the expected degree. Experimental results on a real dataset show that our approximation functions produce accurate probability estimates in linear time, while computing the exact probabilities takes polynomial time of order 3.
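For context, the moments the abstract refers to are cheap to compute: if each edge incident to a node exists independently with probability p_e, the node's degree follows a Poisson-binomial distribution, and its mean, variance, and skewness all follow from linear sums over the incident edge probabilities. The sketch below illustrates this; the function and variable names are illustrative, not taken from the paper.

```python
def degree_moments(edge_probs):
    """Return (mean, variance, skewness) of a Poisson-binomial degree,
    given the existence probabilities of a node's incident edges."""
    mean = sum(edge_probs)
    var = sum(p * (1 - p) for p in edge_probs)
    third = sum(p * (1 - p) * (1 - 2 * p) for p in edge_probs)  # third central moment
    skew = third / var ** 1.5 if var > 0 else 0.0
    return mean, var, skew

# A node with three uncertain incident edges:
m, v, s = degree_moments([0.9, 0.5, 0.1])
# mean = 1.5; variance = 0.09 + 0.25 + 0.09 = 0.43
```

All three quantities cost one pass over the incident edges, which is the linear-time property the abstract contrasts with the cubic cost of exact degree-distribution computation.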

2021 ◽  
Vol 27 (11) ◽  
pp. 563-574
Author(s):  
V. V. Kureychik ◽  
S. I. Rodzin
Computational models of bio-heuristics based on physical and cognitive processes are presented. Characteristics of bio-heuristics (including evolutionary and swarm bio-heuristics) such as the rate of convergence, computational complexity, required memory, configuration of algorithm parameters, and difficulty of software implementation are compared. The balance between the convergence rate of bio-heuristics and the diversification of the search space for solutions to optimization problems is estimated. Experimental results are presented for the problem of placing Peco graphs in a lattice with the minimum total length of the graph edges.


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Ziqi Jia ◽  
Ling Song

The k-prototypes algorithm is a hybrid clustering algorithm that can process both categorical and numerical data. In this study, the method of initial cluster center selection was improved and a new hybrid dissimilarity coefficient was proposed, on which a weighted k-prototypes clustering algorithm (WKPCA) is based. The proposed WKPCA algorithm not only improves the selection of initial cluster centers, but also introduces a new method to calculate the dissimilarity between data objects and cluster centers. Real datasets from the UCI repository were used to test the WKPCA algorithm. Experimental results show that WKPCA is more efficient and robust than other k-prototypes algorithms.
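As background, the classic k-prototypes dissimilarity (Huang's formulation) that hybrid coefficients of this kind build on combines squared Euclidean distance on the numerical attributes with a weighted count of categorical mismatches. The sketch below shows that baseline form only; the paper's weighted hybrid coefficient refines it, and the exact weights are not reproduced here.

```python
def kprototypes_dissimilarity(x_num, x_cat, c_num, c_cat, gamma=1.0):
    """Baseline mixed-type dissimilarity between an object and a cluster center:
    squared Euclidean on numeric parts + gamma * number of categorical mismatches."""
    numeric = sum((a - b) ** 2 for a, b in zip(x_num, c_num))
    categorical = sum(a != b for a, b in zip(x_cat, c_cat))
    return numeric + gamma * categorical

# Object with numeric part [1.0, 2.0] and categories ["red", "A"] versus
# a center [1.0, 0.0], ["red", "B"]: numeric part 4.0, one mismatch.
d = kprototypes_dissimilarity([1.0, 2.0], ["red", "A"], [1.0, 0.0], ["red", "B"], gamma=0.5)
# d = 4.0 + 0.5 * 1 = 4.5
```

The gamma parameter balances the influence of the two attribute types; attribute-level weighting, as in WKPCA, generalizes this single scalar.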


2011 ◽  
Vol 382 ◽  
pp. 418-421
Author(s):  
Dong Yan Cui ◽  
Zai Xing Xie

This paper presents an automatic method for tracking moving objects: a segmentation algorithm quickly and efficiently extracts a moving object, and inter-frame vectors established over subsequent frames are used to track the objects of interest. Experimental results show that the algorithm tracks moving objects of interest accurately and effectively; because the algorithm is simple and its computational complexity is small, it can meet the needs of real-time monitoring systems for extracting and tracking moving objects of interest.


2014 ◽  
Vol 998-999 ◽  
pp. 1169-1173
Author(s):  
Chang Lin He ◽  
Yu Fen Li ◽  
Lei Zhang

An improved genetic algorithm is proposed for QoS routing optimization. By improving the coding scheme, fitness function design, selection scheme, crossover scheme, and mutation operator, the proposed method effectively reduces computational complexity and improves coding accuracy. Simulations were carried out to compare our algorithm with traditional genetic algorithms. Experimental results show that our algorithm converges quickly and is reliable, substantially outperforming the traditional algorithms.
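The components the abstract names (encoding, fitness, selection, crossover, mutation) fit a standard genetic-algorithm skeleton. The minimal sketch below maximizes the number of 1-bits in a string purely as a placeholder objective; a QoS routing GA would instead encode candidate paths and score them against delay and bandwidth constraints. All names and parameter values here are illustrative, not from the paper.

```python
import random

def evolve(pop_size=20, length=16, generations=50, seed=1):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # placeholder objective (count of 1-bits)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:               # bit-flip mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Swapping in a path encoding and a constraint-aware fitness function is where the paper's improvements would apply; the loop structure stays the same.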


2005 ◽  
Vol 05 (04) ◽  
pp. 715-727
Author(s):  
QIANG WANG ◽  
HONGBO CHEN ◽  
XIAORONG XU ◽  
HAIYAN LIU

The heavy burden of computational complexity and massive storage requirements are drawbacks of the standard Hough transform (SHT). To overcome these weaknesses, many modified approaches, such as the probabilistic Hough transform (PHT), have been presented. However, all of these modified Hough transform algorithms ignore an important fact: in a real digital image a line has its own width, and that width is uniform. This influences the result of line detection. In this paper a new modified Hough transform algorithm for line detection is proposed. Our algorithm fully accounts for the fact mentioned above by introducing a strip-shaped area corresponding to the accumulator cells of the Hough transform. Experimental results show that our approach is efficient and promising, and its detection quality is far better than that of the popular modified approaches.
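For reference, the SHT vote loop that all of these variants modify is simple: each edge point (x, y) votes for every (theta, rho) pair satisfying rho = x·cos(theta) + y·sin(theta). The sketch below shows that baseline accumulator; the strip-based variant proposed in the paper widens each vote to a band matching the line's width, which is not reproduced here.

```python
import math
from collections import defaultdict

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Standard Hough transform: accumulate votes in (theta index, rho bin) cells."""
    acc = defaultdict(int)
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    return acc

# Points on the vertical line x = 5 all vote for theta = 0, rho = 5:
acc = hough_lines([(5, y) for y in range(10)])
assert acc[(0, 5)] == 10
```

The per-point cost of n_theta votes, plus the dense (theta, rho) accumulator, is exactly the complexity and storage burden the abstract describes.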


Author(s):  
FRANCESCO G. B. DE NATALE ◽  
FABRIZIO GRANELLI ◽  
GIANNI VERNAZZA

Texture analysis based on the extraction of contrast features is very effective in terms of both computational complexity and discrimination capability. In this framework, max–min approaches have been proposed in the past as a simple and powerful tool to characterize a statistical texture. In the present work, a method is proposed that exploits the potential of max–min approaches to efficiently detect local alterations in a uniform statistical texture. Experimental results show a high defect discrimination capability and good suitability for real-time applications, which make the method particularly attractive for the development of industrial visual inspection systems.
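A max–min contrast feature of the general kind described here assigns each pixel the difference between the maximum and minimum grey level in a small neighbourhood; defect detection then flags regions whose contrast deviates from the texture's typical range. The sketch below shows only this generic feature, with an illustrative window radius, not the paper's specific method.

```python
def maxmin_contrast(image, k=1):
    """Per-pixel max-min contrast over a (2k+1) x (2k+1) window.
    image: list of equal-length rows of grey levels."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [image[a][b]
                      for a in range(max(0, i - k), min(h, i + k + 1))
                      for b in range(max(0, j - k), min(w, j + k + 1))]
            out[i][j] = max(window) - min(window)
    return out

flat = [[5] * 4 for _ in range(4)]
assert maxmin_contrast(flat)[1][1] == 0  # uniform region -> zero contrast
```

Only comparisons and subtractions are needed per pixel, which is the low computational cost that makes these features attractive for real-time inspection.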


2007 ◽  
Vol 5 ◽  
pp. 305-311 ◽  
Author(s):  
B. Heyne ◽  
J. Götze

Abstract: In this paper a computationally efficient, high-quality-preserving DCT architecture is presented. It is obtained by optimizing the Loeffler DCT using the CORDIC algorithm. The computational complexity is reduced from 11 multiplications and 29 additions (Loeffler DCT) to 38 additions and 16 shifts (similar to the complexity of the binDCT). Experimental results show that the proposed DCT algorithm not only reduces the computational complexity significantly, but also retains the good transformation quality of the Loeffler DCT. The proposed CORDIC-based Loeffler DCT is therefore especially suited for low-power, high-quality codecs in battery-powered systems.
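The shift-and-add primitive underlying such an architecture is the CORDIC rotation: a plane rotation decomposed into micro-rotations by angles atan(2^-i), each needing only additions and shifts. The sketch below demonstrates the idea in floating point for clarity; a hardware implementation would use fixed-point shifts, and the iteration count here is illustrative.

```python
import math

def cordic_rotate(x, y, angle, iterations=24):
    """Rotate (x, y) by `angle` radians via CORDIC micro-rotations
    (each step uses only add/subtract and a power-of-two scaling)."""
    for i in range(iterations):
        d = 1.0 if angle >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        angle -= d * math.atan(2.0 ** -i)
    # compensate the accumulated CORDIC gain
    gain = math.prod(math.sqrt(1 + 2.0 ** (-2 * i)) for i in range(iterations))
    return x / gain, y / gain

x, y = cordic_rotate(1.0, 0.0, math.pi / 4)
# both coordinates approach cos(45 deg) = sin(45 deg) ~ 0.7071
```

Replacing the Loeffler DCT's plane rotations with such micro-rotation chains is what trades its multiplications for additions and shifts.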


Author(s):  
Thomas Bläsius ◽  
Philipp Fischbeck ◽  
Tobias Friedrich ◽  
Maximilian Katzmann

Abstract: The computational complexity of the VertexCover problem has been studied extensively. Most notably, it is NP-complete to find an optimal solution and typically NP-hard to find an approximation with reasonable factors. In contrast, recent experiments suggest that on many real-world networks the run time to solve VertexCover is much smaller than even the best known FPT approaches can explain. We link these observations to two properties that are observed in many real-world networks, namely a heterogeneous degree distribution and high clustering. To formalize these properties and explain the observed behavior, we analyze how a branch-and-reduce algorithm performs on hyperbolic random graphs, which have become increasingly popular for modeling real-world networks. In fact, we are able to show that the VertexCover problem on hyperbolic random graphs can be solved in polynomial time with high probability. The proof relies on interesting structural properties of hyperbolic random graphs. Since these predictions of the model are interesting in their own right, we conducted experiments on real-world networks showing that these properties are also observed in practice.
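The core branching step such algorithms rely on is simple: for a highest-degree vertex v, any vertex cover must contain either v or all of its neighbors, so the search branches on those two cases. The sketch below shows only this basic exact branching on tiny graphs; the solvers the paper analyzes add many reduction rules (and their performance on hyperbolic random graphs is the paper's contribution, not reproduced here).

```python
def min_vertex_cover(adj):
    """Exact minimum vertex cover by branching on a highest-degree vertex.
    adj: dict vertex -> set of neighbors (undirected). Exponential worst case."""
    live = {v: n for v, n in adj.items() if n}  # ignore isolated vertices
    if not live:
        return set()
    v = max(live, key=lambda u: len(live[u]))   # highest-degree vertex

    def without(vs):  # remove the vertices in vs and their incident edges
        return {u: n - vs for u, n in adj.items() if u not in vs}

    # Branch 1: v is in the cover. Branch 2: all of N(v) is in the cover.
    take_v = {v} | min_vertex_cover(without({v}))
    take_nv = set(live[v]) | min_vertex_cover(without(set(live[v])))
    return take_v if len(take_v) <= len(take_nv) else take_nv

# Path a-b-c-d: the optimal cover {b, c} has size 2.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
assert len(min_vertex_cover(path)) == 2
```

The paper's result can be read as saying that on hyperbolic random graphs, branching of this kind (combined with reductions) only explores polynomially many cases with high probability.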


Author(s):  
Jiwei Tan ◽  
Xiaojun Wan ◽  
Jianguo Xiao

Headline generation is an abstractive text summarization task that has previously suffered from the immaturity of natural language generation techniques. The recent success of neural sentence summarization models shows their capacity to generate informative, fluent headlines conditioned on selected recapitulative sentences. In this paper, we investigate extending sentence summarization models to the document headline generation task. The challenge is that extending a sentence summarization model to consider more document information mostly confuses the model and hurts performance. We propose a coarse-to-fine approach, which first identifies the important sentences of a document using document summarization techniques, and then exploits a multi-sentence summarization model with hierarchical attention to leverage the important sentences for headline generation. Experimental results on a large real dataset demonstrate that the proposed approach significantly improves the performance of neural sentence summarization models on the headline generation task.

