INCOMPLETENESS IN THE FINITE DOMAIN

2017 ◽  
Vol 23 (4) ◽  
pp. 405-441 ◽  
Author(s):  
PAVEL PUDLÁK

Motivated by the problem of finding finite versions of classical incompleteness theorems, we present some conjectures that go beyond NP ≠ coNP. These conjectures formally connect computational complexity with the difficulty of proving some sentences, which means that the high computational complexity of a problem associated with a sentence implies that the sentence is not provable in a weak theory, or requires a long proof. Another reason for putting forward these conjectures is that some results in proof complexity seem to be special cases of such general statements, and we want to formalize and fully understand these statements. Roughly speaking, we are trying to connect syntactic complexity, by which we mean the complexity of sentences and the strengths of the theories in which they are provable, with the semantic concept of complexity of the computational problems represented by these sentences. We have introduced the most fundamental conjectures in our earlier works [27, 33–35]. Our aim in this article is to present them in a more systematic way, along with several new conjectures, and to prove new connections between them and some other statements studied before.

Author(s):  
LÁSZLÓ T. KÓCZY ◽  
MICHIO SUGENO

Fuzzy control systems have proved their applicability in many areas. Their user-friendliness and transparency certainly belong to their main advantages, and these two properties make it easy to develop and tune such controllers without knowing their exact mathematical description. Nevertheless, it is of interest to know what mathematical functions hide behind a set of fuzzy rules and an inference machine. For practical purposes it is necessary to consider real, implementable fuzzy control systems with reasonably low computational complexity. This paper discusses the problem of what types of functions are generated by realistic fuzzy control systems. The explicit formulae of the transfer functions are determined for practically important special cases: controllers having rules with triangular and trapezoidal membership functions and crisp consequents. Here we restrict our investigations to rules with a single input.
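The paper's explicit formulae are not reproduced in the abstract, but the setting it describes (a single-input controller with triangular antecedents and crisp consequents) can be sketched generically. The function and parameter names below are illustrative, not from the paper; the sketch assumes a Ruspini partition (adjacent triangles that sum to one) with weighted-average defuzzification, under which the controller's transfer function is piecewise linear between rule centers.

```python
import numpy as np

def triangular(x, left, peak, right):
    """Triangular membership function with vertices (left, peak, right)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzy_controller(x, centers, consequents):
    """Single-input controller: triangular antecedents on a Ruspini
    partition over sorted rule centers, crisp (singleton) consequents,
    weighted-average defuzzification.  Between two adjacent centers the
    output is linear interpolation of the two consequents."""
    weights = []
    for i, c in enumerate(centers):
        left = centers[i - 1] if i > 0 else c - 1.0
        right = centers[i + 1] if i < len(centers) - 1 else c + 1.0
        weights.append(triangular(x, left, c, right))
    weights = np.array(weights)
    return float(np.dot(weights, consequents) / weights.sum())
```

At a rule center the controller outputs that rule's consequent exactly, and halfway between two centers it outputs their average, which is the piecewise-linear behavior the abstract alludes to.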


2018 ◽  
Vol 1 (1) ◽  
pp. 139-156 ◽  
Author(s):  
Wen-wen Tung ◽  
Ashrith Barthur ◽  
Matthew C. Bowers ◽  
Yuying Song ◽  
John Gerth ◽  
...  

Author(s):  
Faten Mashta ◽  
Mohieddin Wainakh ◽  
Wissam Altabban

Spectrum sensing in cognitive radio has difficult and complex requirements, such as high speed and sensing accuracy at very low SNRs. In this paper, the authors propose a novel fully blind sequential multistage spectrum sensing detector that overcomes the limitations of a single-stage detector and exploits the advantages of each detector in each stage. In the first stage, energy detection is used because of its simplicity; however, its performance degrades at low SNRs. In the second and third stages, the maximum eigenvalue detector is adopted with a different smoothing factor in each stage. The maximum eigenvalue detection technique provides good detection performance at low SNRs, but requires high computational complexity; its probability of detection improves as the smoothing factor increases, at the expense of further computational complexity. The simulation results illustrate that the proposed detector has better sensing accuracy than the three individual detectors, with a computational complexity that lies between those of the three individual detectors.
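The two detector families and the sequential cascade can be sketched as follows. This is a generic illustration, not the paper's implementation; the thresholds and smoothing factors are placeholder values, and in practice each threshold would be calibrated to a target false-alarm probability.

```python
import numpy as np

def energy_statistic(x):
    """Stage 1: energy detector, the average received power."""
    return float(np.mean(np.abs(x) ** 2))

def max_eigenvalue_statistic(x, L):
    """Stages 2 and 3: maximum-eigenvalue detector with smoothing factor L.
    Stacks L consecutive samples into an L x N data matrix, forms the
    sample covariance and returns its largest eigenvalue.  A larger L
    improves detection but the L x L eigen-decomposition costs more."""
    N = len(x) - L + 1
    X = np.stack([x[i:i + L] for i in range(N)], axis=1)  # L x N
    R = (X @ X.T) / N                                     # sample covariance
    return float(np.max(np.linalg.eigvalsh(R)))

def multistage_detect(x, thresholds, L2=4, L3=8):
    """Sequential cascade: declare the channel busy as soon as any stage
    fires; later (costlier) stages run only if earlier ones are inconclusive."""
    if energy_statistic(x) > thresholds[0]:
        return True
    if max_eigenvalue_statistic(x, L2) > thresholds[1]:
        return True
    return max_eigenvalue_statistic(x, L3) > thresholds[2]
```

The cascade reflects the abstract's design: the cheap energy detector handles easy cases, and the eigenvalue stages with growing smoothing factor are reserved for low-SNR decisions.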


2020 ◽  
Vol 10 (3) ◽  
pp. 24
Author(s):  
Stefania Preatto ◽  
Andrea Giannini ◽  
Luca Valente ◽  
Guido Masera ◽  
Maurizio Martina

High Efficiency Video Coding (HEVC) is the latest video standard developed by the Joint Collaborative Team on Video Coding (JCT-VC). HEVC offers better compression than preceding standards, but it suffers from a high computational complexity. In particular, one of the most time-consuming blocks in HEVC is the fractional-sample interpolation filter, which is used in both the encoding and the decoding processes. Integrating different state-of-the-art techniques, this paper presents an architecture for interpolation filters that is able to trade quality for energy and power efficiency by exploiting approximate interpolation filters and by halving the amount of required memory with respect to state-of-the-art implementations.
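To make the quality-versus-complexity trade-off concrete, here is a minimal sketch of half-sample luma interpolation. The 8-tap coefficients are, to the best of my knowledge, those of the HEVC half-pel luma filter; the 2-tap bilinear alternative is an illustrative stand-in for an approximate filter, not the specific approximation used in the paper.

```python
import numpy as np

# 8-tap HEVC half-sample luma filter (taps sum to 64, hence the >> 6 shift).
HEVC_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1])
# Illustrative cheap approximation: 2-tap bilinear, also normalised to 64.
APPROX_HALF = np.array([32, 32])

def interp_half(samples, taps):
    """Half-sample interpolation from the len(taps) neighbouring integer
    samples: multiply-accumulate, round, normalise by 64."""
    acc = int(np.dot(samples[:len(taps)], taps))
    return (acc + 32) >> 6

row = np.full(8, 100, dtype=np.int64)
exact = interp_half(row, HEVC_HALF)     # 8 multiply-accumulates per sample
approx = interp_half(row, APPROX_HALF)  # 2 multiply-accumulates per sample
```

On flat regions both filters agree, which is why approximate filters can cut the multiply-accumulate count (and hence energy) with limited quality loss on typical content.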


Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 905 ◽  
Author(s):  
Sovanjyoti Giri ◽  
Rajarshi Roy

The Drop-When-Seen (DWS) technique, an online network coding strategy, is capable of making a broadcast transmission over erasure channels more robust. This throughput-optimal strategy reduces the expected sender queue length. One major issue with the DWS technique is its high computational complexity. In this paper, we present a randomized version of the DWS technique (rDWS), in which the unique strength of DWS, the sender's ability to drop a packet even before its decoding at the receivers, is not compromised. The computational complexity of the algorithms is reduced with rDWS, but the encoding is no longer throughput optimal, so we perform a throughput efficiency analysis. An exact probabilistic analysis of the innovativeness of a coefficient is found to be difficult. Hence, we carry out two individual analyses, a maximum entropy analysis and an average understanding analysis, and obtain a lower bound on the innovativeness probability of a coefficient. Based on these findings, the innovativeness probability of a coded combination is analyzed. We evaluate the performance of our proposed scheme in terms of dropping and decoding statistics through simulation. Our analysis, supported by plots, reveals some interesting facts about innovativeness and shows that the rDWS technique achieves near-optimal performance for a finite field of sufficient size.
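The role of the field size can be illustrated with a standard random-linear-coding fact (not the paper's bound): a uniformly random coding vector in GF(q)^n is innovative with respect to a fixed k-dimensional knowledge space with probability 1 - q^(k-n), which approaches 1 as q grows. The helper names below are mine; the rank routine is the usual Gaussian elimination over a prime field.

```python
from fractions import Fraction

def innovative_probability(q, n, k):
    """P(random vector in GF(q)^n falls outside a fixed k-dim subspace)."""
    return 1 - Fraction(1, q ** (n - k))

def rank_gf(matrix, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian
    elimination; enough to test whether a received combination is
    innovative (i.e., increases the rank of the knowledge space)."""
    m = [[x % p for x in row] for row in matrix]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)       # inverse mod prime p
        m[rank] = [(v * inv) % p for v in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank
```

For q = 256 and k = n - 1, the probability is already 255/256, consistent with the abstract's finding that rDWS is near-optimal once the field is large enough.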


Author(s):  
YUNYUN WANG ◽  
SONGCAN CHEN ◽  
HUI XUE

AUC-SVM directly maximizes the area under the ROC curve (AUC) by minimizing its hinge loss relaxation, and its decision function is determined by support vector sample pairs playing the same role as the support vector samples in SVM. Such a learning paradigm generally emphasizes the local discriminative information associated with these support vectors while hardly taking an overall view of the data, and so it may lose global distribution information that is favorable for classification. Moreover, due to the high computational complexity of AUC-SVM, induced by the number of training sample pairs being quadratic in the number of samples, sampling is usually adopted, incurring a further loss of distribution information. In order to compensate for this loss and simultaneously boost AUC-SVM performance, in this paper we develop a novel structure-embedded AUC-SVM (SAUC-SVM for short) that embeds the global structure information of the whole data set into AUC-SVM. With such an embedding, the proposed SAUC-SVM incorporates local discriminative information and global structure information into a uniform formulation and consequently guarantees better generalization performance. Comparative experiments on both synthetic and real datasets confirm its effectiveness.
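The pairwise hinge relaxation that AUC-SVM minimizes, and the quadratic pair count behind its cost, can be sketched directly. This is the standard surrogate, not the SAUC-SVM objective itself (which additionally embeds global structure information); the function names are mine.

```python
import numpy as np

def pairwise_hinge_auc_loss(scores_pos, scores_neg):
    """Hinge relaxation of 1 - AUC: sum over all (positive i, negative j)
    pairs of max(0, 1 - (s_i - s_j)).  The pair count |P| * |N| is
    quadratic in the sample size, which is what forces sampling."""
    diff = scores_pos[:, None] - scores_neg[None, :]  # all pairwise margins
    return float(np.maximum(0.0, 1.0 - diff).sum())

def empirical_auc(scores_pos, scores_neg):
    """Fraction of pairs ranked correctly (ties count one half)."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())
```

Note that a pair contributes to the loss whenever its margin falls below 1, even if it is already ranked correctly; those margin-violating pairs are the "support vector pairs" the abstract refers to.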


Author(s):  
RYO INOKUCHI ◽  
SADAAKI MIYAMOTO

Recently, kernel methods from support vector machines have been widely used in machine learning algorithms to obtain nonlinear models. Clustering is an unsupervised learning method that divides a data set into subgroups, and popular clustering algorithms such as c-means have been combined with kernel methods. Other kernel-based clustering algorithms have been inspired by kernel c-means. However, the formulation of kernel c-means has a high computational complexity. This paper gives an alternative formulation of kernel-based clustering algorithms derived from competitive learning clustering. The new formulation uses sequential updating, that is, on-line learning, to avoid this high computational complexity. We apply kernel methods to related algorithms: learning vector quantization and the self-organizing map. We moreover consider kernel methods for sequential c-means and its fuzzy version using the proposed formulation.
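The key device in sequential kernel clustering can be sketched as follows: each prototype is stored as a coefficient vector over the data, so feature-space distances are computed from the kernel matrix alone, and the winning prototype is moved toward each presented point on-line. This is a generic competitive-learning sketch under that representation, not the authors' exact algorithm; names and the learning-rate schedule are illustrative.

```python
import numpy as np

def kernel_sequential_cluster(K, n_clusters, epochs=10, eta=0.5,
                              seed=0, init=None):
    """On-line kernel competitive learning.  Prototype c is a coefficient
    vector a_c over the data, so the feature-space distance
    ||phi(x_t) - m_c||^2 = K[t,t] - 2 a_c.K[:,t] + a_c.K.a_c
    needs no explicit feature vectors; the winner moves toward x_t by
    learning rate eta (a_c stays a convex combination of data points)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    if init is None:
        init = rng.choice(n, n_clusters, replace=False)
    A = np.zeros((n_clusters, n))
    A[np.arange(n_clusters), init] = 1.0
    diag = np.diag(K)
    for _ in range(epochs):
        for t in rng.permutation(n):
            d2 = diag[t] - 2 * A @ K[:, t] + np.einsum('ci,ij,cj->c', A, K, A)
            w = int(np.argmin(d2))
            A[w] *= (1.0 - eta)
            A[w, t] += eta
    D2 = (diag[None, :] - 2 * A @ K
          + np.einsum('ci,ij,cj->c', A, K, A)[:, None])
    return np.argmin(D2, axis=0)
```

Each presentation costs one kernel column rather than the full reassignment sweep of batch kernel c-means, which is the complexity saving the abstract describes.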


2014 ◽  
Vol 23 (05) ◽  
pp. 1450069
Author(s):  
FARZAD ZARGARI ◽  
SEDIGHE GHORBANI

In order to achieve higher compression performance, the fidelity range extensions (FRExt) amendment was added to the H.264 advanced video coding (AVC) standard. It adaptively uses both 4 × 4 and 8 × 8 integer discrete cosine transforms (DCT) in the high profiles. This added complexity to the initial version of the H.264/AVC encoder, which already had a substantially high computational complexity. In this paper, we propose a new algorithm that reduces the computational complexity of a software implementation of the horizontal 8 × 8 integer DCT by more than 25%. Simulation results indicate a 22% reduction in computation time using the proposed algorithm.
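The abstract does not spell out the proposed algorithm, but the kind of saving it reports is typical of symmetry-based factorisations. As a generic illustration (not the paper's method), the classic even/odd decomposition of an 8-point DCT exploits the fact that even-indexed basis rows are symmetric and odd-indexed rows antisymmetric, halving the multiplication count:

```python
import numpy as np

# 8-point DCT-II basis (unnormalised); even rows are symmetric and odd
# rows antisymmetric, the property fast DCT factorisations exploit.
T = np.array([[np.cos((2 * n + 1) * k * np.pi / 16) for n in range(8)]
              for k in range(8)])

def dct8_direct(x):
    """Direct matrix-vector product: 64 multiplications."""
    return T @ x

def dct8_even_odd(x):
    """Even/odd decomposition: 8 additions plus 32 multiplications."""
    s = x[:4] + x[7:3:-1]          # symmetric sums   x[n] + x[7-n]
    d = x[:4] - x[7:3:-1]          # antisymmetric diffs x[n] - x[7-n]
    y = np.empty(8)
    y[0::2] = T[0::2, :4] @ s      # even outputs: 16 multiplications
    y[1::2] = T[1::2, :4] @ d      # odd outputs:  16 multiplications
    return y
```

Integer-DCT variants such as the FRExt 8 × 8 transform admit the same style of decomposition, which is the general route to the 25% operation-count reductions the paper targets.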


2018 ◽  
Vol 61 ◽  
pp. 407-431 ◽  
Author(s):  
William S. Zwicker

We introduce the (j,k)-Kemeny rule -- a generalization of Kemeny's voting rule that aggregates j-chotomous weak orders into a k-chotomous weak order. Special cases of (j,k)-Kemeny include approval voting, the mean rule and Borda mean rule, as well as the Borda count and plurality voting. Why, then, is the winner problem computationally tractable for each of these other rules, but intractable for Kemeny? We show that intractability of winner determination for the (j,k)-Kemeny rule first appears at the j=3, k=3 level. The proof rests on a reduction of max cut to a related problem on weighted tournaments, and reveals that computational complexity arises from the cyclic part in the fundamental decomposition of a weighted tournament into cyclic and cocyclic components. Thus the existence of majority cycles -- the engine driving both Arrow's impossibility theorem and the Gibbard-Satterthwaite theorem -- also serves as a source of computational complexity in social choice.
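The source of the intractability can be seen in the brute-force definition of the full Kemeny rule: minimise total Kendall-tau distance over all orderings, which is exponential in the number of candidates. The sketch below is a standard textbook illustration for strict (fully chotomous) votes, not the (j,k)-generalisation of the paper.

```python
from itertools import permutations

def kendall_tau(ranking, vote):
    """Number of candidate pairs ordered oppositely by the two strict
    orders (each given as a list, best first)."""
    pos_r = {c: i for i, c in enumerate(ranking)}
    pos_v = {c: i for i, c in enumerate(vote)}
    cands = list(ranking)
    return sum(1 for i in range(len(cands)) for j in range(i + 1, len(cands))
               if (pos_r[cands[i]] - pos_r[cands[j]])
                  * (pos_v[cands[i]] - pos_v[cands[j]]) < 0)

def kemeny(votes):
    """Brute-force Kemeny: the ordering minimising total Kendall-tau
    distance to the votes.  Enumerating all m! orderings is only feasible
    for tiny m, reflecting the NP-hardness of winner determination."""
    cands = votes[0]
    return min(permutations(cands),
               key=lambda r: sum(kendall_tau(list(r), v) for v in votes))
```

When the majority relation is acyclic the optimum simply follows it; it is exactly the majority cycles highlighted in the abstract that force the search over orderings.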


Author(s):  
Y. Z. Gu ◽  
K. Qin ◽  
Y. X. Chen ◽  
M. X. Yue ◽  
T. Guo

Massive trajectory data contains a wealth of useful information and knowledge. Spectral clustering, which has been shown to be effective in finding clusters, has become an important clustering approach in trajectory data mining. However, traditional spectral clustering lacks a temporal extension and is limited in its applicability to large-scale problems due to its high computational complexity. This paper presents a parallel spatiotemporal spectral clustering method based on multiple acceleration solutions that makes the algorithm more effective and efficient; its performance is demonstrated through experiments carried out on a massive taxi trajectory dataset from Wuhan, China.
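The serial baseline whose cost motivates the parallelisation can be sketched in a few lines for the two-cluster case. This is plain spectral bipartitioning on point data, not the paper's spatiotemporal or parallel variant; the eigen-decomposition is the O(n^3) bottleneck referred to in the abstract.

```python
import numpy as np

def spectral_bipartition(points, sigma=1.0):
    """Two-way spectral clustering: RBF affinity matrix, unnormalised
    graph Laplacian L = D - W, then split by the sign of the Fiedler
    (second-smallest) eigenvector.  The dense eigen-decomposition is the
    step that dominates the cost on large data sets."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # pairwise affinities
    L = np.diag(W.sum(axis=1)) - W              # unnormalised Laplacian
    _, vecs = np.linalg.eigh(L)                 # O(n^3) bottleneck
    return (vecs[:, 1] > 0).astype(int)         # sign of Fiedler vector
```

A spatiotemporal version would build the affinity from both spatial and temporal proximity of trajectory points, and the acceleration solutions in the paper attack the affinity construction and the eigen-decomposition in parallel.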

