On NACK-Based rDWS Algorithm for Network Coded Broadcast

Entropy ◽  
2019 ◽  
Vol 21 (9) ◽  
pp. 905 ◽  
Author(s):  
Sovanjyoti Giri ◽  
Rajarshi Roy

The Drop When Seen (DWS) technique, an online network coding strategy, makes broadcast transmission over erasure channels more robust. This throughput-optimal strategy reduces the expected sender queue length. One major issue with the DWS technique is its high computational complexity. In this paper, we present a randomized version of the DWS technique (rDWS), in which the unique strength of DWS, the sender's ability to drop a packet even before its decoding at the receivers, is not compromised. rDWS reduces the computational complexity of the algorithms, but its encoding is not throughput optimal, so we perform a throughput efficiency analysis of it. An exact probabilistic analysis of the innovativeness of a coefficient turns out to be difficult. Hence, we carry out two separate analyses, a maximum entropy analysis and an average understanding analysis, and obtain a lower bound on the innovativeness probability of a coefficient. Based on these findings, the innovativeness probability of a coded combination is analyzed. We evaluate the performance of the proposed scheme in terms of dropping and decoding statistics through simulation. Our analysis, supported by plots, reveals some interesting facts about innovativeness and shows that the rDWS technique achieves near-optimal performance for a finite field of sufficient size.
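The innovativeness notion above can be illustrated with a small simulation. This is a hedged sketch, not the rDWS algorithm itself: it works over GF(2) rather than a general finite field GF(q), represents coded combinations as bitmasks, and uses Monte Carlo estimation; the function names are invented for illustration. For a receiver holding `have` independent combinations of `k` source packets, a uniformly random combination is innovative (rank-increasing) with probability 1 - 2**(have - k).

```python
import random

def add_to_basis(v, basis):
    """Try to insert bitmask v into a GF(2) basis keyed by leading bit.
    Returns True iff v is independent of the basis (i.e. innovative)."""
    while v:
        hb = v.bit_length() - 1
        if hb not in basis:
            basis[hb] = v
            return True
        v ^= basis[hb]          # reduce v by the basis vector with this leading bit
    return False

def estimate_innovative(k, have, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a uniformly random GF(2)
    combination of k source packets is innovative for a receiver already
    holding `have` independent combinations. Theory: 1 - 2**(have - k)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        basis = {}
        while len(basis) < have:                    # random `have`-dim subspace
            add_to_basis(rng.randrange(1, 1 << k), basis)
        if add_to_basis(rng.randrange(0, 1 << k), dict(basis)):
            hits += 1
    return hits / trials
```

For k = 8 source packets and 6 combinations already seen, the estimate should be close to 1 - 2**(-2) = 0.75, consistent with the observation that a larger field (larger q) pushes this probability toward 1.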

2007 ◽  
Vol 188 (1) ◽  
pp. 638-640 ◽  
Author(s):  
G.R. Jahanshahloo ◽  
M. Soleimani-damaneh ◽  
A. Mostafaee

2018 ◽  
Vol 1 (1) ◽  
pp. 139-156 ◽  
Author(s):  
Wen-wen Tung ◽  
Ashrith Barthur ◽  
Matthew C. Bowers ◽  
Yuying Song ◽  
John Gerth ◽  
...  

Author(s):  
Faten Mashta ◽  
Mohieddin Wainakh ◽  
Wissam Altabban

Spectrum sensing in cognitive radio faces difficult and conflicting requirements, such as the need for both speed and sensing accuracy at very low SNRs. In this paper, the authors propose a novel fully blind sequential multistage spectrum sensing detector to overcome the limitations of single-stage detectors and exploit the advantages of each detector at each stage. In the first stage, energy detection is used because of its simplicity, although its performance degrades at low SNRs. In the second and third stages, the maximum eigenvalue detector is adopted, with a different smoothing factor in each stage. Maximum eigenvalue detection provides good detection performance at low SNRs, but it requires high computational complexity; its probability of detection improves as the smoothing factor increases, at the expense of additional computation. The simulation results illustrate that the proposed detector achieves better sensing accuracy than the three individual detectors, with a computational complexity that lies between the three individual complexities.
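The staged structure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' detector: the thresholds, smoothing factors, and the use of a max/min eigenvalue ratio (a common blind statistic) are assumptions, and a real detector would calibrate thresholds to target false-alarm rates.

```python
import numpy as np

def energy_statistic(x):
    """Stage 1: average signal power (cheap, but unreliable at low SNR)."""
    return float(np.mean(np.abs(x) ** 2))

def eigen_ratio(x, L):
    """Max/min eigenvalue ratio of the LxL sample covariance built with
    smoothing factor L; blind, stronger at low SNR, costlier for larger L."""
    N = len(x) - L + 1
    X = np.stack([x[i:i + N] for i in range(L)])   # L x N smoothed data matrix
    R = (X @ X.conj().T) / N                       # sample covariance
    eig = np.linalg.eigvalsh(R)                    # ascending order
    return float(eig[-1] / eig[0])

def sequential_sense(x, e_low, e_high, Ls=(8, 16), ratio_thr=1.5):
    """Three-stage sequential sketch: decide with the cheap energy statistic
    when it is conclusive, otherwise escalate through eigenvalue stages with
    a growing smoothing factor. All threshold values here are illustrative."""
    t = energy_statistic(x)
    if t >= e_high:
        return True        # confidently occupied, stop early
    if t < e_low:
        return False       # confidently vacant, stop early
    return bool(any(eigen_ratio(x, L) >= ratio_thr for L in Ls))
```

The sequential shape is what keeps the average cost low: most easy decisions terminate at the energy stage, and only ambiguous observations pay for the eigendecompositions.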


2020 ◽  
Vol 10 (3) ◽  
pp. 24 ◽  
Author(s):  
Stefania Preatto ◽  
Andrea Giannini ◽  
Luca Valente ◽  
Guido Masera ◽  
Maurizio Martina

High Efficiency Video Coding (HEVC) is the latest video standard developed by the Joint Collaborative Team on Video Coding (JCT-VC). HEVC offers better compression than preceding standards, but it suffers from high computational complexity. In particular, one of the most time-consuming blocks in HEVC is the fractional-sample interpolation filter, which is used in both the encoding and the decoding processes. Integrating different state-of-the-art techniques, this paper presents an architecture for interpolation filters that trades quality for energy and power efficiency by exploiting approximate interpolation filters and by halving the amount of memory required with respect to state-of-the-art implementations.
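To make the approximation trade-off concrete, here is a minimal sketch of half-pel luma interpolation. The 8-tap coefficients are the standard HEVC half-sample luma filter; the 4-tap variant is a hypothetical shorter approximation (not the paper's filter) that keeps the same DC gain of 64 while needing fewer multiplications and fewer reference samples per output.

```python
# 8-tap HEVC half-sample luma filter (coefficients from the H.265 standard)
HEVC_HALF = (-1, 4, -11, 40, 40, -11, 4, -1)

# Hypothetical reduced 4-tap approximation: cheaper and narrower support,
# at some quality cost; coefficients still sum to 64 (unity DC gain)
APPROX_HALF = (-4, 36, 36, -4)

def half_pel(samples, taps):
    """Interpolate the half-pel value centered between the two middle
    samples, using HEVC-style integer arithmetic (6-bit normalization)."""
    assert len(samples) == len(taps)
    acc = sum(s * t for s, t in zip(samples, taps))
    return (acc + 32) >> 6          # round, then divide by 64
```

On a linear ramp both filters land on the midpoint: `half_pel([0, 10, 20, 30, 40, 50, 60, 70], HEVC_HALF)` and `half_pel([20, 30, 40, 50], APPROX_HALF)` both return 35. They diverge on high-frequency content, which is exactly the quality-for-complexity trade the architecture exposes.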


2017 ◽  
Vol 23 (4) ◽  
pp. 405-441 ◽  
Author(s):  
PAVEL PUDLÁK

Abstract Motivated by the problem of finding finite versions of classical incompleteness theorems, we present some conjectures that go beyond NP ≠ coNP. These conjectures formally connect computational complexity with the difficulty of proving some sentences, which means that high computational complexity of a problem associated with a sentence implies that the sentence is not provable in a weak theory, or requires a long proof. Another reason for putting forward these conjectures is that some results in proof complexity seem to be special cases of such general statements and we want to formalize and fully understand these statements. Roughly speaking, we are trying to connect syntactic complexity, by which we mean the complexity of sentences and strengths of the theories in which they are provable, with the semantic concept of complexity of the computational problems represented by these sentences. We have introduced the most fundamental conjectures in our earlier works [27, 33–35]. Our aim in this article is to present them in a more systematic way, along with several new conjectures, and prove new connections between them and some other statements studied before.


2018 ◽  
Vol 12 (2) ◽  
pp. 101-118 ◽  
Author(s):  
Prabhat Kushwaha

Abstract In 2004, Muzereau, Smart and Vercauteren [A. Muzereau, N. P. Smart and F. Vercauteren, The equivalence between the DHP and DLP for elliptic curves used in practical applications, LMS J. Comput. Math. 7 2004, 50–72] showed how to use a reduction algorithm from the discrete logarithm problem to the Diffie–Hellman problem in order to estimate a lower bound for the Diffie–Hellman problem on elliptic curves. They presented their estimates for various elliptic curves that are used in practical applications. In this paper, we show that a much tighter lower bound for the Diffie–Hellman problem on those curves can be achieved if one uses the multiplicative group of a finite field as the auxiliary group. The improved lower bound estimates of the Diffie–Hellman problem on those recommended curves are also presented. Moreover, we extend our idea by presenting similar estimates of the DHP on some further recommended curves which were not covered before. These estimates of the DHP on these curves are currently the tightest, and they lead us towards the equivalence of the Diffie–Hellman problem and the discrete logarithm problem on these recommended elliptic curves.
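The easy direction of the DLP/DHP relationship can be shown in a toy example. This sketch works in the multiplicative group of a tiny prime field and uses brute force in place of an efficient DLP algorithm; the group parameters and function names are illustrative, and the hard direction (bounding DHP from below via an auxiliary group, in the den Boer/Maurer style) is what the estimates above actually quantify.

```python
def brute_dlog(g, h, p):
    """Toy DLP solver by exhaustive search (viable only for tiny groups)."""
    x, acc = 0, 1
    while acc != h:
        acc = acc * g % p       # step to g^(x+1)
        x += 1
    return x

def solve_dhp_via_dlp(g, ga, gb, p):
    """Any DLP oracle yields a DHP solver: recover a from g^a, then raise
    g^b to that exponent, giving g^(ab) without knowing b."""
    a = brute_dlog(g, ga, p)
    return pow(gb, a, p)
```

For example, with p = 467, g = 2, and secret exponents a = 123, b = 321, the solver reconstructs `pow(g, a * b, p)` from the public values `pow(g, a, p)` and `pow(g, b, p)` alone.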


Author(s):  
YUNYUN WANG ◽  
SONGCAN CHEN ◽  
HUI XUE

AUC-SVM directly maximizes the area under the ROC curve (AUC) by minimizing its hinge-loss relaxation; the decision function is determined by support vector sample pairs, which play the same role as the support vector samples in SVM. Such a learning paradigm emphasizes the local discriminative information associated with these support vectors while hardly taking an overall view of the data, and thus may lose global distribution information in the data that is favorable for classification. Moreover, because the high computational complexity of AUC-SVM stems from the number of training sample pairs being quadratic in the number of samples, sampling is usually adopted, incurring a further loss of distribution information. To compensate for this loss and simultaneously boost AUC-SVM performance, in this paper we develop a novel structure-embedded AUC-SVM (SAUC-SVM for short) that embeds the global structure information of the whole dataset into AUC-SVM. With such an embedding, the proposed SAUC-SVM incorporates the local discriminative information and global structure information in the data into a uniform formulation and consequently guarantees better generalization performance. Comparative experiments on both synthetic and real datasets confirm its effectiveness.
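The pairwise objective behind this discussion can be written down directly. This is a minimal sketch of the AUC hinge relaxation for a linear scorer, not the SAUC-SVM formulation: the function names are invented, there is no regularizer or structure term, and materializing the full pair matrix is exactly the quadratic cost that motivates sampling.

```python
import numpy as np

def auc_hinge_loss(w, X_pos, X_neg):
    """Pairwise hinge relaxation of 1 - AUC for a linear scorer f(x) = w.x:
    every positive should outscore every negative by a margin of 1. The
    |pos| x |neg| pair matrix is the source of the quadratic complexity."""
    gaps = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]   # all pos-neg score gaps
    return float(np.mean(np.maximum(0.0, 1.0 - gaps)))

def empirical_auc(w, X_pos, X_neg):
    """Fraction of positive-negative pairs ranked correctly by w."""
    gaps = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]
    return float(np.mean(gaps > 0))
```

On perfectly separated data with a margin above 1 the hinge loss is 0 and the empirical AUC is 1; only the pairs with small or negative score gaps (the support vector pairs) contribute to the loss and its gradient.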

