check matrix
Recently Published Documents

TOTAL DOCUMENTS: 119 (last five years: 12)
H-INDEX: 9 (last five years: 0)

Author(s):  
Alireza Hasani ◽  
Lukasz Lopacinski ◽  
Rolf Kraemer

Abstract: Layered decoding (LD) facilitates a partially parallel architecture for performing the belief propagation (BP) algorithm when decoding low-density parity-check (LDPC) codes. Such a schedule for LDPC codes has, in general, lower implementation complexity than a fully parallel architecture and a higher convergence rate than both serial and parallel architectures, regardless of the codeword length or code rate. In this paper, we introduce a modified shuffling method which shuffles the rows of the parity-check matrix (PCM) of a quasi-cyclic LDPC (QC-LDPC) code, yielding a PCM in which each layer can be produced by circulating the layer above it one symbol to the right. The proposed shuffling scheme additionally guarantees that the columns of each layer of the shuffled PCM have weight zero or one. This condition plays a key role in further decreasing LD complexity. We show that, owing to these two properties, the number of occupied look-up tables (LUTs) on a field-programmable gate array (FPGA) is reduced by about 93% and the consumed on-chip power by nearly 80%, while the bit error rate (BER) performance is maintained. The only drawback of the shuffling is a degradation of decoding throughput, which is negligible for low values of $$E_b/N_0$$ down to a BER of $$10^{-6}$$.
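The two structural properties claimed above can be sketched as checks on a toy binary PCM stored as a list of rows; the function names, the layer size, and the example matrix are illustrative, not taken from the paper:

```python
def roll_right(rows, shift=1):
    """Circularly shift every row of a layer `shift` columns to the right."""
    return [row[-shift:] + row[:-shift] for row in rows]

def is_layer_shift_structured(H, rows_per_layer):
    """Check that each layer equals the layer above it rolled one column right."""
    layers = [H[i:i + rows_per_layer] for i in range(0, len(H), rows_per_layer)]
    return all(layers[i + 1] == roll_right(layers[i])
               for i in range(len(layers) - 1))

def layers_have_column_weight_le1(H, rows_per_layer):
    """Check that every column of every layer has weight 0 or 1."""
    layers = [H[i:i + rows_per_layer] for i in range(0, len(H), rows_per_layer)]
    return all(sum(col) <= 1 for layer in layers for col in zip(*layer))

# Toy PCM: 2 layers of 1 row each; layer 1 is layer 0 shifted right by one.
H = [
    [1, 0, 1, 0],   # layer 0
    [0, 1, 0, 1],   # layer 1
]
print(is_layer_shift_structured(H, 1))      # True
print(layers_have_column_weight_le1(H, 1))  # True
```

A real QC-LDPC PCM would have circulant blocks of size much larger than this toy, but the same two predicates apply layer by layer.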


2021 ◽  
Author(s):  
Alla Levina ◽  
Gleb Ryaskin ◽  
Sergey Taranov ◽  
Anna Polubaryeva
Keyword(s):  

2021 ◽  
Vol 4 (9(112)) ◽  
pp. 46-53
Author(s):  
Viktor Durcek ◽  
Michal Kuba ◽  
Milan Dado

This paper investigates the construction of random-structure LDPC (low-density parity-check) codes using the Progressive Edge-Growth (PEG) algorithm and two proposed algorithms for removing short cycles (the CB1 and CB2 algorithms; CB stands for Cycle Break). Progressive Edge-Growth is an algorithm for computer-based design of random-structure LDPC codes, whose role is to generate a Tanner graph (a bipartite graph representing the parity-check matrix of an error-correcting channel code) with as few short cycles as possible. Short cycles in Tanner graphs of LDPC codes, especially the shortest ones with a length of 4 edges, can degrade the performance of the decoding algorithm, because after a certain number of decoding iterations the messages passed along their edges are no longer independent. The main contribution of this paper is the unique approach to removing short cycles taken by the CB2 algorithm, which erases edges from the code's parity-check matrix without decreasing the minimum Hamming distance of the code. The two cycle-removing algorithms can be used to improve the error-correcting performance of PEG-generated (or any other) LDPC codes, and the achieved results are provided. All these algorithms were used to create a PEG LDPC code that rivals the best-known PEG-generated LDPC code with similar parameters provided by one of the founders of LDPC codes. The methods for generating the mentioned error-correcting codes are described, along with simulations comparing the error-correcting performance of the original codes generated by the PEG algorithm, the PEG codes processed by either the CB1 or CB2 algorithm, and the external PEG code published by one of the founders of LDPC codes.
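The length-4 cycles mentioned above have a simple matrix characterization: a 4-cycle exists exactly when two rows of the parity-check matrix share ones in two or more columns. The following sketch counts them (the function name and toy matrices are illustrative; the paper's CB1/CB2 edge-removal logic itself is not reproduced here):

```python
from math import comb

def count_4cycles(H):
    """Count length-4 cycles in the Tanner graph of parity-check matrix H:
    each pair of rows sharing t >= 2 columns contributes C(t, 2) 4-cycles."""
    total = 0
    for i in range(len(H)):
        for j in range(i + 1, len(H)):
            t = sum(a & b for a, b in zip(H[i], H[j]))  # column overlap
            total += comb(t, 2)
    return total

H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]   # rows 0 and 1 share columns 0 and 1 -> one 4-cycle
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1]]  # no pair of rows shares more than one column
print(count_4cycles(H_bad))   # 1
print(count_4cycles(H_good))  # 0
```

A cycle-breaking pass in the spirit of the abstract would remove one edge from each such overlapping pair while checking that the code's minimum distance is preserved.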


2021 ◽  
Author(s):  
Surdive Atamewoue Tsafack

This chapter presents some new perspectives in the field of coding theory. Notions of fuzzy sets and hyperstructures, considered here as non-classical structures, are used in the construction of linear codes, as is done for fields and rings. We study the properties of these classes of codes using well-known notions such as the orthogonal (dual) of a code, the generator matrix, the parity-check matrix, and polynomials. In particular, we compare linear codes constructed over a Krasner hyperfield with codes constructed over a finite field (called here classical structures), and we find that linear codes constructed over a Krasner hyperfield have more codewords for the same parameters.


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Hansong Du ◽  
Jiufen Liu ◽  
Yuguo Tian ◽  
Xiangyang Luo

Compared with traditional steganography, adaptive steganography based on STC (Syndrome-Trellis Codes) has extremely high anti-detection ability and has been a mainstream, active research direction in information hiding over the past decades. However, in specific scenarios, a small number of methods can extract data from STC-based adaptive steganography, indicating security risks in such algorithms. In this manuscript, the cryptographic secrecy of this kind of steganography is analyzed under two common attacks, the stego-only attack and the known-cover attack, from three perspectives: steganographic key equivocation, message equivocation, and unicity distance of the steganographic key. Focusing on the special layout characteristics of the STC parity-check matrix, theoretical bounds on the steganographic key equivocation function, the message equivocation function, and the unicity distance of the steganographic key are obtained under both attack conditions. These bounds show how three elements, the submatrix size, the randomness of the data, and the cover object, affect the cryptographic secrecy of STC-based adaptive steganography, providing a theoretical reference for accurately judging the cryptographic secrecy of such steganography and for designing more secure steganographic methods.
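The unicity-distance notion used above comes from classical cryptanalysis: the expected number of observed symbols needed to determine the key uniquely is the key entropy divided by the per-symbol redundancy. The paper's actual bounds depend on the STC submatrix layout and are not reproduced here; this is only the textbook formula with hypothetical numbers:

```python
import math

def unicity_distance(key_space_size, redundancy_per_symbol_bits):
    """Classical unicity distance U = H(K) / D, with H(K) = log2(|K|)
    for a uniformly chosen key and D the redundancy per symbol in bits."""
    return math.log2(key_space_size) / redundancy_per_symbol_bits

# Hypothetical example: a 2^16 key space and 3.2 bits of redundancy per symbol.
print(unicity_distance(2**16, 3.2))  # 5.0
```

Larger key spaces or lower redundancy push the unicity distance up, which is the direction the abstract's secrecy analysis quantifies for STC parameters.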


Author(s):  
Gianira N. Alfarano ◽  
Julia Lieb ◽  
Joachim Rosenthal

Abstract: In this paper, a construction of $$(n,k,\delta)$$ LDPC convolutional codes over arbitrary finite fields is provided, generalizing the work of Robinson and Bernstein and the later work of Tong. The sets of integers forming a (k, w)-(weak) difference triangle set are used as supports of some columns of the sliding parity-check matrix of an $$(n,k,\delta)$$ convolutional code, where $$n\in\mathbb{N}$$, $$n>k$$. The parameters of the convolutional code are related to the parameters of the underlying difference triangle set. In particular, a relation between the free distance of the code and w is established, as well as a relation between the degree of the code and the scope of the difference triangle set. Moreover, we show that some conditions on the weak difference triangle set ensure that the Tanner graph associated to the sliding parity-check matrix of the convolutional code is free from $$2\ell$$-cycles not satisfying the full rank condition over any finite field. Finally, we relax these conditions and provide a lower bound on the field size, depending on the parity of $$\ell$$, that is sufficient to still avoid $$2\ell$$-cycles. This is important for improving the performance of a code and avoiding the presence of low-weight codewords and absorbing sets.
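The difference triangle sets used as column supports above are collections of integer sets whose pairwise differences are all distinct. A minimal sketch of that defining property (for the strong variant; the paper's "weak" variant relaxes it, and the example sets below are illustrative):

```python
from itertools import combinations

def is_difference_triangle_set(sets):
    """Check that all positive pairwise differences, taken over every set
    in the collection, are distinct (the 'strong' property)."""
    diffs = []
    for s in sets:
        for a, b in combinations(sorted(s), 2):
            diffs.append(b - a)
    return len(diffs) == len(set(diffs))

# {0, 1, 3} is a classical perfect difference set: differences 1, 2, 3.
print(is_difference_triangle_set([{0, 1, 3}]))  # True
# {0, 1, 2} fails: the difference 1 occurs twice.
print(is_difference_triangle_set([{0, 1, 2}]))  # False
```

In the construction, each set in the collection prescribes the row positions of the nonzero entries of one column of the sliding parity-check matrix, and the distinct-differences property is what keeps short cycles out of the associated Tanner graph.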


2021 ◽  
Author(s):  
Shyam Saurabh

<p>Structured LDPC codes have been constructed using balanced incomplete block (BIB) designs, resolvable BIB designs, mutually orthogonal Latin rectangles, partial geometries, group divisible designs, resolvable group divisible designs and finite geometries. Here we construct LDPC codes from <i>α</i>-resolvable BIB and group divisible designs. The sub-matrices of the incidence matrix of such a block design are used as the parity-check matrix of the code, which satisfies the row-column constraint. The girth of the proposed code is at least six, so the corresponding LDPC code (or Tanner graph) is free of 4-cycles.</p>



Author(s):  
Jagannath Samanta ◽  
Akash Kewat

Recently, there has been continuously rising interest in multi-bit error correction codes (ECCs) for protecting memory cells from soft errors, which may also enhance the reliability of memory systems. Single error correction and double error detection (SEC-DED) codes are generally employed in many high-speed memory systems. In this paper, Hsiao-based SEC-DED codes are optimized using two proposed optimization algorithms applied to the parity-check matrix and the error correction logic. The theoretical area complexity of the proposed SEC-DED codecs is at most 49.29%, 18.64% and 49.21% lower than that of the Hsiao codes [M. Y. Hsiao, A class of optimal minimum odd-weight-column SEC-DED codes, IBM J. Res. Dev. 14 (1970) 395–401], the Reviriego et al. codes [P. Reviriego, S. Pontarelli, J. A. Maestro and M. Ottavi, A method to construct low delay single error correction codes for protecting data bits only, IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 32 (2013) 479–483] and the Liu et al. codes [S. Liu, P. Reviriego, L. Xiao and J. A. Maestro, A method to recover critical bits under a double error in SEC-DED protected memories, Microelectron. Reliab. 73 (2017) 92–96], respectively. The proposed codec is designed and implemented on both field programmable gate array (FPGA) and ASIC platforms. The synthesized SEC-DED codecs need 31.14% fewer LUTs than the original Hsiao code. The optimized codec is faster than the existing related codecs without affecting power consumption. These compact, faster SEC-DED codecs are employed in cache memory to enhance reliability.
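The SEC-DED mechanism itself can be illustrated with a plain extended Hamming (8,4) code rather than the paper's optimized Hsiao construction: the syndrome locates a single error, and an overall parity bit separates single errors (parity flips) from double errors (parity unchanged, syndrome nonzero). All names below are illustrative:

```python
def syndrome(bits7):
    """Positional syndrome of the 7 Hamming bits: XOR of the 1-based
    positions of the set bits, so a nonzero value names the error position."""
    s = 0
    for pos in range(1, 8):
        if bits7[pos - 1]:
            s ^= pos
    return s

def decode_secded(received):
    """received: 8 bits = 7 Hamming bits + 1 overall parity bit.
    Returns (codeword, status), status in {'ok', 'corrected', 'double_error'}."""
    s = syndrome(received[:7])
    parity_ok = sum(received) % 2 == 0
    if s == 0 and parity_ok:
        return received, 'ok'
    if not parity_ok:                 # overall parity flipped: single error
        fixed = received[:]
        if s != 0:
            fixed[s - 1] ^= 1         # error among the 7 Hamming bits
        else:
            fixed[7] ^= 1             # error in the parity bit itself
        return fixed, 'corrected'
    return received, 'double_error'   # s != 0 but parity unchanged

cw = [0] * 8              # the all-zero word is a valid codeword
r = cw[:]; r[2] ^= 1      # single bit flip at position 3
print(decode_secded(r))   # corrected back to the all-zero word
```

A Hsiao code improves on this baseline by using odd-weight columns only, which simplifies the double-error test to checking whether the syndrome weight is even; the paper's optimizations go further on the matrix and logic level.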


Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 605
Author(s):  
Elad Romanov ◽  
Or Ordentlich

Motivated by applications in unsourced random access, this paper develops a novel scheme for the problem of compressed sensing of binary signals. In this problem, the goal is to design a sensing matrix A and a recovery algorithm such that the sparse binary vector x can be recovered reliably from the measurements y = Ax + σz, where z is additive white Gaussian noise. We propose to design A as the parity-check matrix of a low-density parity-check (LDPC) code and to recover x from the measurements y using a Markov chain Monte Carlo algorithm, which runs relatively fast due to the sparse structure of A. The performance of our scheme is comparable to state-of-the-art schemes that use dense sensing matrices, while enjoying the advantages of a sparse sensing matrix.
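A minimal sketch of the MCMC recovery idea, assuming a toy dense implementation: a Metropolis sampler over binary vectors targeting exp(-β·||y − Ax||²), flipping one coordinate per step. The paper's sampler additionally exploits the sparse LDPC structure of A for fast incremental updates, which this sketch does not do; the matrix, signal, and parameters below are all illustrative:

```python
import math
import random

def mcmc_recover(A, y, iters=20000, beta=2.0, seed=0):
    """Metropolis sampler over binary x targeting exp(-beta * ||y - A x||^2).
    Returns the best vector seen and its squared residual."""
    rng = random.Random(seed)
    n = len(A[0])

    def residual(x):
        return sum((yi - sum(a * b for a, b in zip(row, x))) ** 2
                   for row, yi in zip(A, y))

    x = [0] * n
    cost = residual(x)
    best, best_cost = x[:], cost
    for _ in range(iters):
        i = rng.randrange(n)
        x[i] ^= 1                       # propose a single-bit flip
        new_cost = residual(x)
        if new_cost <= cost or rng.random() < math.exp(-beta * (new_cost - cost)):
            cost = new_cost             # accept
            if cost < best_cost:
                best, best_cost = x[:], cost
        else:
            x[i] ^= 1                   # reject: undo the flip
    return best, best_cost

# Toy noiseless instance: sparse A, sparse binary x_true, y = A x_true.
A = [[1, 0, 1, 0, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [1, 1, 0, 0, 1, 0],
     [0, 0, 1, 1, 0, 1]]
x_true = [1, 0, 0, 0, 0, 1]
y = [sum(a * b for a, b in zip(row, x_true)) for row in A]
x_hat, cost = mcmc_recover(A, y)
print(x_hat, cost)
```

On a problem this small the sampler typically reaches residual 0 and recovers x_true exactly; with noisy measurements one would instead report the lowest-residual sample or a posterior average.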

