Extension of the BCH decoding algorithm to decode binary cyclic codes up to their maximum error correction capacities

1988 ◽  
Vol 34 (5) ◽  
pp. 1332-1340 ◽  
Author(s):  
P. Stevens


Mathematics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 15
Author(s):  
Lucky Galvez ◽  
Jon-Lark Kim

Practically good error-correcting codes should have good parameters and efficient decoding algorithms. Some algebraically defined good codes, such as cyclic codes, Reed–Solomon codes, and Reed–Muller codes, have nice decoding algorithms. However, many optimal linear codes have no efficient decoding algorithm other than general syndrome decoding, which requires a great deal of memory. Therefore, a natural question to ask is which optimal linear codes have an efficient decoding algorithm. We show that two binary optimal [36, 19, 8] linear codes and two binary optimal [40, 22, 8] codes have an efficient decoding algorithm; previously, no efficient decoding algorithm was known for these codes. We project them onto the much shorter linear [9, 5, 4] and [10, 6, 4] codes over GF(4), respectively. This decoding algorithm, called projection decoding, can correct errors of weight up to 3. The [36, 19, 8] and [40, 22, 8] codes have more codewords than any optimal self-dual [36, 18, 8] and [40, 20, 8] codes of the same length and minimum weight, respectively, implying that these codes are more practical.
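To make the projection step concrete, here is a minimal Python sketch of one standard way to project blocks of four bits onto GF(4) symbols, as used in hexacode-style decoding; the weight vector (0, 1, ω, ω²) and the block layout are our illustrative assumptions, and the paper's exact map may differ:

```python
# A minimal sketch of the projection step, assuming the standard
# hexacode-style map: a block of four bits (b0, b1, b2, b3) projects to
# b0*0 + b1*1 + b2*w + b3*w^2 in GF(4). Encoding GF(4) = {0, 1, w, w^2}
# as the integers {0, 1, 2, 3}, field addition is bitwise XOR.

def project(bits):
    """Project a binary word of length 4n onto a GF(4) word of length n."""
    assert len(bits) % 4 == 0
    weights = [0, 1, 2, 3]              # 0, 1, w, w^2
    symbols = []
    for i in range(0, len(bits), 4):
        s = 0
        for bit, wgt in zip(bits[i:i + 4], weights):
            if bit:
                s ^= wgt                # GF(4) addition
        symbols.append(s)
    return symbols

# A 36-bit word projects to 9 GF(4) symbols, as in the [36,19,8] case:
word = [0] * 36
word[1] = 1                             # adds 1 to symbol 0
word[6] = 1                             # adds w to symbol 1
print(project(word))                    # [1, 2, 0, 0, 0, 0, 0, 0, 0]
```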


2019 ◽  
Vol 9 (5) ◽  
pp. 831
Author(s):  
Yusheng Xing ◽  
Guofang Tu

In this paper, we propose a low-complexity ordered statistics decoding (OSD) algorithm, called threshold-based OSD (TH-OSD), that uses a threshold on the discrepancy of the candidate codewords to speed up the decoding of short polar codes. To determine the threshold, we use the probability distribution of the discrepancy value of the maximum-likelihood codeword, with a predefined parameter controlling the trade-off between error-correction performance and decoding complexity. We also derive an upper bound on the word error rate (WER) of the proposed algorithm. The complexity analysis shows that our algorithm is faster than the conventional successive cancellation (SC) decoding algorithm in mid-to-high signal-to-noise ratio (SNR) situations and much faster than the SC list (SCL) decoding algorithm. Adding a list extension further narrows the error-correction performance gap between TH-OSD and OSD. Our simulation results show that, with appropriate thresholds, the proposed algorithm achieves performance close to OSD's while testing significantly fewer codewords, especially at low SNR values. Even a small list is sufficient for TH-OSD to match OSD's error rate in short-code scenarios. The algorithm can be easily extended to longer code lengths.
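The core loop can be sketched as follows: run order-1 OSD reprocessing over the most reliable basis (MRB), but exit as soon as a candidate's discrepancy falls below the threshold. This is a simplified Python illustration; the generator matrix, fixed threshold value, and helper names are our assumptions, and the paper derives its threshold from the discrepancy distribution rather than fixing it:

```python
import numpy as np

def osd_threshold_decode(G, llr, threshold, order=1):
    """Order-1 OSD with an early-exit discrepancy threshold (sketch)."""
    k, n = G.shape
    hard = (llr < 0).astype(int)            # hard decision per bit
    perm = np.argsort(-np.abs(llr))         # most reliable positions first

    # Gaussian elimination to make G systematic on the most reliable
    # independent columns (the MRB).
    Gp = G[:, perm] % 2
    row, mrb = 0, []
    for col in range(n):
        piv = next((r for r in range(row, k) if Gp[r, col]), None)
        if piv is None:
            continue
        Gp[[row, piv]] = Gp[[piv, row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        mrb.append(col)
        row += 1
        if row == k:
            break

    hp = hard[perm]
    def discrepancy(cand):                  # weighted disagreement with
        return float(np.sum(np.abs(llr[perm])[cand != hp]))  # hard decisions

    base = hp[mrb]                          # order-0 information bits
    best = (base @ Gp) % 2
    best_d = discrepancy(best)
    if best_d >= threshold and order >= 1:  # order-1 reprocessing
        for i in range(k):
            msg = base.copy()
            msg[i] ^= 1                     # flip one MRB bit, reencode
            cand = (msg @ Gp) % 2
            d = discrepancy(cand)
            if d < best_d:
                best, best_d = cand, d
            if best_d < threshold:          # early exit: good enough
                break
    out = np.empty(n, dtype=int)
    out[perm] = best                        # undo the reliability sort
    return out

# Toy usage with a small [8,4] code (not a polar code):
G = np.array([[1,0,0,0,0,1,1,1],
              [0,1,0,0,1,0,1,1],
              [0,0,1,0,1,1,0,1],
              [0,0,0,1,1,1,1,0]])
llr = np.array([3.1, -2.4, 4.0, 1.9, -2.2, 0.2, 2.7, -3.3])
print(osd_threshold_decode(G, llr, threshold=1.0))
```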


2013 ◽  
Vol 12 (04) ◽  
pp. 1250185 ◽  
Author(s):  
Fernando Hernando ◽  
Diego Ruano

We propose a decoding algorithm for the (u | u + v)-construction that decodes up to half of the minimum distance of the linear code. We extend this algorithm to a class of matrix-product codes in two different ways. In some cases, one can decode beyond the error-correction capability of the code.
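For reference, a hard-decision decoder for the (u | u + v)-construction typically estimates v first from the sum of the two halves and then recovers u. The sketch below is a common textbook variant in Python; the toy component codes and the candidate-selection rule are our illustrative choices, not the algorithm of the paper:

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def brute_decoder(codewords):
    """Nearest-codeword decoder for a small component code."""
    return lambda r: min(codewords, key=lambda c: hamming(c, r))

def xor(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def decode_plotkin(r, dec_u, dec_v):
    """Hard-decision decoding of the (u | u+v)-construction."""
    n = len(r) // 2
    r1, r2 = r[:n], r[n:]
    v_hat = dec_v(xor(r1, r2))          # r1 + r2 = v + (e1 + e2)
    # two independent looks at u: r1 and r2 + v_hat
    cands = [dec_u(r1), dec_u(xor(r2, v_hat))]
    u_hat = min(cands, key=lambda u: hamming(u + xor(u, v_hat), r))
    return u_hat + xor(u_hat, v_hat)

# Toy example: U = [4,3,2] even-weight code, V = [4,1,4] repetition code,
# giving an [8,4,4] code (RM(1,3)); it corrects any single error.
U = [c for c in product((0, 1), repeat=4) if sum(c) % 2 == 0]
V = [(0, 0, 0, 0), (1, 1, 1, 1)]
dec_u, dec_v = brute_decoder(U), brute_decoder(V)

u, v = (1, 1, 0, 0), (1, 1, 1, 1)
sent = u + xor(u, v)
recv = list(sent)
recv[5] ^= 1                            # flip one bit
print(decode_plotkin(tuple(recv), dec_u, dec_v) == sent)   # True
```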


2018 ◽  
Vol 7 (03) ◽  
pp. 23781-23784
Author(s):  
Rajarshini Mishra

Low-density parity-check (LDPC) codes have been shown to offer error-correcting performance approaching Shannon's limit, enabling efficient and reliable communication. However, an LDPC decoding algorithm must execute efficiently to meet the cost, time, power, and bandwidth requirements of target applications. Quasi-cyclic low-density parity-check (QC-LDPC) codes are an important subclass of LDPC codes, known as one of the most effective error-control methods, and they possess some degree of regularity. Many important communication standards, such as DVB-S2 and 802.16e, use these codes. The proposed optimized Min-Sum decoding algorithm performs very close to Sum-Product decoding while preserving the main features of Min-Sum decoding, namely low complexity and independence from noise-variance estimation errors. The proposed decoder is well suited to VLSI implementation and will be implemented on a Xilinx FPGA family.
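As an illustration of the Min-Sum structure the abstract refers to, here is a normalized Min-Sum check-node update in Python; the scaling factor 0.8 is an illustrative assumption, and the paper's optimized variant may tune the correction differently:

```python
import numpy as np

def min_sum_check_update(llrs, alpha=0.8):
    """Normalized Min-Sum update for one check node.

    llrs: incoming variable-to-check LLR messages for this check.
    Returns the outgoing check-to-variable messages. Each outgoing
    message takes the product of the other edges' signs and the
    minimum of the other edges' magnitudes, scaled by alpha.
    """
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        # extrinsic rule: the edge holding the minimum uses the 2nd minimum
        m = min2 if i == order[0] else min1
        # total_sign * signs[i] equals the product of all other signs
        out[i] = alpha * total_sign * signs[i] * m
    return out

print(min_sum_check_update([1.5, -0.3, 2.0, -4.1]))
# the smallest magnitude (0.3) governs every other edge's output
```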


Author(s):  
A. V. Kushnerov ◽  
V. A. Lipinski ◽  
M. N. Koroliova

Bose–Chaudhuri–Hocquenghem (BCH) codes form one of the most popular and widespread classes of linear cyclic error-correcting codes. Their close connection with the theory of Galois fields made it possible to create a theory of norms of syndromes for BCH codes, namely, syndrome invariants of the G-orbits of errors, and to develop a theory of polynomial invariants of the G-orbits of errors. This theory as a whole served as the basis for the development of effective permutation polynomial-norm methods and error-correction algorithms that significantly reduce the influence of the selector problem. To date, these methods represent the only approach to correcting errors with non-primitive BCH codes when the error multiplicity exceeds the designed bound. This work is dedicated to a special class of error-correcting codes: generic Bose–Chaudhuri–Hocquenghem codes, or simply GBCH codes. A sufficiently accurate estimate of the number of such codes of each length was obtained in the course of this work. We have investigated some properties of, and connections between, different GBCH codes. Special attention was devoted to codes with constructive distances 3 and 5, as these are the codes in common practical use; an almost complete description of them is given for lengths from 7 to 107. The paper contains a fairly clear theoretical classification of GBCH codes. Special attention is paid to the corrective capabilities of codes in this class, namely, to the calculation of their minimum distances for various parameters. Codes are found whose corrective capabilities significantly exceed those of the well-known GBCH codes with the same design parameters.
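To make the norm-of-syndromes idea concrete: for a double-error-correcting BCH code, a cyclic shift of the error vector multiplies S1 by α and S3 by α³, so the ratio N = S3/S1³ is constant on each cyclic orbit of errors. Below is a minimal Python illustration over GF(16) using the syndromes of the [15, 7, 5] BCH code; this toy example is our own, and the full norm theory referenced above is considerably more general:

```python
# Build GF(16) with primitive polynomial x^4 + x + 1 (alpha = x).
def build_gf16():
    exp, log = [0] * 15, [0] * 16
    x = 1
    for i in range(15):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0x10:
            x ^= 0x13            # reduce modulo x^4 + x + 1
    return exp, log

EXP, LOG = build_gf16()

def mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 15]

def inv(a):
    return EXP[(15 - LOG[a]) % 15]

def syndromes(err_positions):
    """S1 and S3 for an error pattern in the [15,7,5] BCH code."""
    s1 = s3 = 0
    for p in err_positions:
        s1 ^= EXP[p % 15]
        s3 ^= EXP[(3 * p) % 15]
    return s1, s3

def norm(s1, s3):
    # N = S3 / S1^3: an invariant of the cyclic orbit of the error
    return mul(s3, inv(mul(mul(s1, s1), s1)))

# All 15 cyclic shifts of the double error {2, 7} share a single norm:
norms = set()
for shift in range(15):
    s1, s3 = syndromes([(2 + shift) % 15, (7 + shift) % 15])
    norms.add(norm(s1, s3))
print(norms)                     # one value: the orbit invariant
```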


Author(s):  
Zhong-xun Wang ◽  
Yang Xi ◽  
Zhan-kai Bao

Among decoding algorithms for nonbinary low-density parity-check (NB-LDPC) codes, the iterative hard-reliability-based majority-logic decoding (IHRB-MLGD) algorithm has poor error-correction performance, essentially because hard information is used both at initialization and during the iterations. To address the partial loss of information when reliabilities are assigned at initialization, the proposed algorithm modifies this initial assignment, determining it from the probability of a given number of erroneous bits occurring in a symbol and from the Hamming distance. In addition, whereas IHRB-MLGD uses hard decisions in the iterative decoding process, the improved algorithm incorporates soft-decision information in its iterations, which improves error-correction performance while only slightly increasing decoding complexity, and it refines the reliability-accumulation process, making the algorithm more stable. Simulation results indicate that the proposed algorithm achieves better decoding performance than the IHRB algorithm.
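The majority-logic voting core that IHRB-style decoders build on can be sketched as follows over GF(4): each check votes for the extrinsic symbol value that would satisfy it, votes accumulate into per-symbol reliability counters, and the hard decision is re-taken as the argmax. Everything here (the parity-check format, the initial reliability boost, the update schedule) is an illustrative simplification, not the paper's improved algorithm:

```python
# GF(4) encoded as {0,1,2,3}; addition is XOR, multiplication by table.
GF4_MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
GF4_INV = [0, 1, 3, 2]                   # multiplicative inverses (0 unused)

def mul(a, b):
    return GF4_MUL[a][b]

def ihrb_decode(H, z, iters=10, init_reliability=4):
    """H: checks as lists of (column, coefficient); z: hard symbols."""
    n = len(z)
    # reliability counters: boost the channel hard decision initially
    R = [[0] * 4 for _ in range(n)]
    for j in range(n):
        R[j][z[j]] = init_reliability
    for _ in range(iters):
        votes = [[0] * 4 for _ in range(n)]
        for check in H:
            syn = 0
            for col, coeff in check:
                syn ^= mul(coeff, z[col])        # GF(4) addition is XOR
            for col, coeff in check:
                # extrinsic symbol that would make this check pass
                extrinsic = syn ^ mul(coeff, z[col])
                sigma = mul(GF4_INV[coeff], extrinsic)
                votes[col][sigma] += 1
        for j in range(n):                       # accumulate, re-decide
            for s in range(4):
                R[j][s] += votes[j][s]
            z[j] = max(range(4), key=lambda s: R[j][s])
    return z

# Toy single-check example: c0 + w*c1 + c2 = 0 over GF(4)
H = [[(0, 1), (1, 2), (2, 1)]]
print(ihrb_decode(H, [1, 0, 1]))   # 1 + 0 + 1 = 0: already consistent
```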

