Quantization Error Correction Schemes for Lattice-Reduction Aided MIMO Detectors

2011 ◽  
Vol 1 (2) ◽  
Author(s):  
Tien Duc Nguyen ◽  
Tadashi Fujino ◽  
Xuan Nam Tran

Lattice reduction aided (LRA) linear detectors are known to achieve near-optimal performance at low complexity. However, one weakness of the LRA detector is that its quantization step is suboptimal. Based on simulation results, we show that most detection errors in LRA linear detectors are due to quantization errors. We then propose two methods to correct the quantization errors. In the first method, sphere detectors are introduced to correct quantization errors at low additional complexity. As a second approach, we propose a list quantization scheme that generates a list of candidate symbols from the original LRA estimated symbols. From these listed symbols, decisions are made according to the minimum Euclidean distance between the received and estimated points. Simulations show that both methods provide significant BER performance improvements with only a small additional complexity.
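The list-quantization idea described above can be sketched as follows. This is a minimal illustration, not the authors' exact construction: the candidate list is built by perturbing each rounded coordinate of the LRA estimate by small integer offsets, and the minimum-Euclidean-distance candidate is selected.

```python
import itertools
import numpy as np

def list_quantize_detect(y, H, z_hat, list_radius=1):
    """Illustrative list quantization for an LRA detector.

    y: received vector; H: effective channel matrix; z_hat: unquantized
    LRA estimate. Around the rounded estimate we try per-coordinate
    offsets in {-list_radius, ..., +list_radius}, form candidate symbol
    vectors, and keep the one minimizing ||y - H s||.
    """
    base = np.round(z_hat)
    offsets = range(-list_radius, list_radius + 1)
    best_s, best_d = None, np.inf
    for delta in itertools.product(offsets, repeat=len(z_hat)):
        cand = base + np.array(delta)
        d = np.linalg.norm(y - H @ cand)
        if d < best_d:
            best_s, best_d = cand, d
    return best_s
```

With `z_hat = [1.6, 2.1]` and a noiseless received vector `[1, 2]`, naive rounding would decide `[2, 2]`, while the list search recovers the correct `[1, 2]`, which is exactly the class of quantization error the abstract targets.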

Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1314
Author(s):  
Taeoh Kim ◽  
Hyobeen Park ◽  
Yunho Jung ◽  
Seongjoo Lee

In this paper, we propose a tag sensor that uses multiple antennas in a Wi-Fi backscatter system, which improves the data rate or reliability of the signal transmitted from the tag sensor to a reader. The existing power-level modulation method, which was proposed to improve the data rate in a Wi-Fi backscatter system, has low reliability due to the reduced distance between symbols. To address this problem, we propose a Wi-Fi backscatter system that obtains channel diversity by applying multiple antennas. Two backscatter methods are described for improving the data rate or reliability in the proposed system. In addition, we propose three low-complexity demodulation methods to address the high computational complexity caused by multiple antennas: (1) the SET (subcarrier energy-based threshold) method, (2) the TCST (tag's channel state-based threshold) method, and (3) the SED (similar Euclidean distance) method. To verify the performance of the proposed backscatter method and low-complexity demodulation schemes, the 802.11 TGn (task group n) channel model was used in simulation. The proposed tag sensor structure was compared with existing methods that use only sub-channels with a large difference in received CSI (channel state information) values or that adopt power-level modulation. The proposed scheme showed about 10 dB better bit error rate (BER) performance and throughput. The proposed low-complexity demodulation schemes achieved similar BER performance, within 1 dB, while reducing computational complexity by up to 60% compared to the existing Euclidean distance method.
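The threshold-based demodulation idea behind the SET method can be sketched roughly as follows. This is only a functional model inferred from the method's name: the per-symbol subcarrier energies, the averaging step, and the on/off decision rule are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def set_demodulate(subcarrier_energies, threshold):
    """Illustrative SET-style demodulation: average the subcarrier
    energies observed during each tag symbol and compare the mean
    against a threshold to decide whether the tag reflected ('1')
    or absorbed ('0') the ambient Wi-Fi signal.

    subcarrier_energies: array of shape (n_symbols, n_subcarriers).
    Returns an int array of decided bits, one per symbol.
    """
    mean_energy = np.mean(subcarrier_energies, axis=-1)
    return (mean_energy > threshold).astype(int)
```

The appeal of such a scheme, as the abstract notes, is that a mean-and-compare decision is far cheaper than computing Euclidean distances against reference CSI patterns for every antenna.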


2011 ◽  
Vol 271-273 ◽  
pp. 458-463
Author(s):  
Rui Ping Chen ◽  
Zhong Xun Wang ◽  
Xin Qiao Yu

Decoding algorithms of practical value must combine good decoding performance with computational complexity that is as low as possible. To this end, this paper presents a modified min-sum decoding algorithm (M-MSA). Without increasing decoding complexity, it improves error-correcting performance by applying an appropriate scaling factor to the min-sum algorithm (MSA), and it is well suited to hardware implementation. Simulation results show that the algorithm achieves good BER performance, low complexity, and low hardware resource utilization, and that it can be applied effectively in practice.
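The scaling-factor modification described above is the standard normalized min-sum check-node update; a minimal sketch, assuming the common form in which the outgoing magnitude is the scaled minimum of the other incoming magnitudes:

```python
import numpy as np

def normalized_min_sum_check(msgs, alpha=0.75):
    """Check-node update of a normalized (modified) min-sum decoder.

    For each outgoing edge i: sign = product of the signs of the other
    incoming messages, magnitude = alpha * min of their absolute values.
    alpha < 1 compensates the magnitude overestimate of plain min-sum
    relative to the sum-product algorithm.
    """
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))
        out[i] = alpha * sign * np.min(np.abs(others))
    return out
```

Because the update uses only sign logic, a minimum, and one constant multiply (often realized as a shift-and-add), it maps naturally onto hardware, which matches the abstract's claim of low resource utilization.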


2020 ◽  
Vol 10 (23) ◽  
pp. 8572
Author(s):  
Hao Wang ◽  
Mingqi Li ◽  
Chao Wang

A non-uniform constellation (NUC) can effectively reduce the gap between bit-interleaved coded modulation (BICM) capacity and Shannon capacity, and has been adopted in recent wireless broadcasting systems. However, the soft demapping algorithm requires many Euclidean distance (ED) calculations and comparisons, which makes NUC demapping costly. A universal low-complexity NUC demapping algorithm is proposed in this paper, which forms subsets based on the quadrant of the two-dimensional NUC (2D-NUC) received symbol or the sign of the in-phase (I)/quadrature (Q) component of the one-dimensional NUC (1D-NUC) received symbol. ED calculations and comparisons are carried out only on the constellation points contained in these subsets. To further reduce the number of constellation points per subset, the proposed algorithm exploits the condensation property of NUC and treats a cluster of several constellation points as a single virtual point. Analysis and simulation results show that, compared with the Max-Log-MAP algorithm, the proposed algorithm greatly reduces the demapping complexity of NUC with negligible performance loss.
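The quadrant-pruning step for a 2D-NUC can be sketched as below. This illustrates only the subset-selection idea; the full algorithm in the abstract additionally merges clusters into virtual points and computes bitwise LLRs over the pruned set.

```python
import numpy as np

def quadrant_subset(points, r):
    """Keep only the constellation points lying in the same quadrant
    as the received symbol r, so that Euclidean distances are computed
    over a subset rather than the full constellation.

    points: complex array of constellation points; r: received symbol.
    """
    same_i = np.sign(points.real) == np.sign(r.real)
    same_q = np.sign(points.imag) == np.sign(r.imag)
    return points[same_i & same_q]
```

For a square-like constellation this single test cuts the ED workload roughly fourfold, which is the source of the complexity savings the abstract reports (the sign test on I or Q alone plays the analogous role for a 1D-NUC).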


2012 ◽  
Vol 195-196 ◽  
pp. 96-103
Author(s):  
Ke Wen Liu ◽  
Quan Liu

The soft-output complex list sphere decoding (CLSD) algorithm is a low-complexity MIMO detection algorithm whose BER performance approaches that of maximum likelihood. However, its complexity is not fixed, which makes it very difficult to implement. To resolve this while retaining the algorithm's advantages, a modified algorithm, the fixed complex list sphere decoding (FCLSD) algorithm, was proposed. Based on the LTE TDD system, this paper studies the performance of the FCLSD algorithm. Simulation results show that the BER performance of the FCLSD algorithm is close to that of the CLSD algorithm, while, as the number of antennas and the modulation order increase, the FCLSD algorithm remains unconstrained by the sphere radius and keeps a fixed complexity. In addition, the FCLSD algorithm can be implemented in hardware using parallel processing, further reducing its complexity. It is therefore a high-performance algorithm of great potential.
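The fixed-complexity property can be illustrated with a toy search that always evaluates the same number of candidate vectors regardless of any sphere radius. This sketch only captures the "fixed work" aspect; the real FCLSD orders and prunes candidates per detection layer rather than taking a lexicographic slice, so the candidate choice here is purely an assumption for demonstration.

```python
import itertools
import numpy as np

def fixed_list_detect(y, H, constellation, list_size):
    """Toy fixed-complexity list detection: evaluate exactly `list_size`
    candidate symbol vectors (here the first ones in lexicographic order
    over a small constellation) and return the minimum-distance one.

    The work done is independent of the noise level or any sphere
    radius, which is what makes hardware pipelining straightforward.
    """
    n = H.shape[1]
    cands = itertools.islice(
        itertools.product(constellation, repeat=n), list_size)
    best_s, best_d = None, np.inf
    for c in cands:
        s = np.array(c, dtype=float)
        d = np.linalg.norm(y - H @ s)
        if d < best_d:
            best_s, best_d = s, d
    return best_s, best_d
```

Because every received vector triggers the same `list_size` distance evaluations, the datapath has a constant latency, unlike a radius-constrained sphere search whose visit count varies with the channel realization.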


2021 ◽  
Author(s):  
Andrey Rashich ◽  
Aleksei Krylov ◽  
Dmitrii Fadeev ◽  
Kirill Sinjutin

<div>VLSI architectures for a stack or priority queue (PQ) are required in the implementation of stack (sequential) decoders of polar codes. Such decoders provide good BER performance while keeping complexity low. Extracting the best and the worst paths from the PQ is the most complex operation in terms of both latency and complexity, because it requires a full search along the priority queue. In this work, we propose a low-latency, low-complexity parallel hardware architecture for the PQ, based on a systolic sorter and simplified sorting primitives. Simulation results show that only a small BER degradation is introduced compared to ideal full sorting networks. The proposed PQ architecture is implemented on an FPGA, and synthesis results are presented for all components of the PQ.</div>
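The best/worst extraction that the hardware PQ must support can be modeled in software as below. This is only a functional reference for what the systolic sorter computes, not a model of the proposed architecture itself; the class and its interface are assumptions for illustration.

```python
import bisect

class BestWorstQueue:
    """Software model of the priority queue a stack decoder needs:
    the best path metric must be extracted for extension, and the
    worst entry dropped when the queue exceeds its capacity.

    Metrics are kept in a sorted list; higher metric = better path.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, metric):
        # Insert keeping the list sorted; evict the worst (smallest)
        # metric if the queue is over capacity.
        bisect.insort(self.items, metric)
        if len(self.items) > self.capacity:
            self.items.pop(0)

    def pop_best(self):
        # Largest metric = best path to extend next.
        return self.items.pop()
```

In software the insert costs O(n); the point of a systolic sorter is to keep the entries ordered in hardware so that both extremes are available at the queue's ends in constant time.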


2014 ◽  
Vol 543-547 ◽  
pp. 2004-2008
Author(s):  
Xin Min Li ◽  
Bao Ming Bai ◽  
Juan Zhao

Existing methods based on convex-optimization theory that use the concept of SINR can only design the optimal precoder for each single-antenna user. In this paper, we design optimal precoding matrices for multi-user MIMO downlinks by solving the optimization problem that minimizes total transmit power subject to signal-to-leakage-plus-noise-ratio (SLNR) constraints. Because each user's SLNR is determined by its own precoding matrix and is independent of the other users', the overall problem can be separated into a series of decoupled low-complexity quadratically constrained quadratic programs (QCQPs). Using the semidefinite relaxation (SDR) technique, these QCQPs can be reformulated as semidefinite programs (SDPs) and solved efficiently. Simulation results show that the proposed precoding scheme is feasible when each user has two receive antennas, and that it achieves better bit error rate (BER) performance than the original maximal-SLNR precoding scheme when each user's SLNR threshold is large.
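The SLNR metric that drives the per-user decoupling can be written out directly. This sketch computes the ratio for given precoding and channel matrices; the function name and argument layout are illustrative, and solving the constrained QCQP/SDP itself is beyond this snippet.

```python
import numpy as np

def slnr(k, precoders, channels, noise_var):
    """Signal-to-leakage-plus-noise ratio of user k.

    Numerator: power user k receives through its own precoder W_k.
    Denominator: power W_k leaks to all other users' channels, plus
    noise. Only W_k appears, which is why the per-user constraints
    decouple as described in the abstract.
    """
    Wk = precoders[k]
    signal = np.linalg.norm(channels[k] @ Wk, 'fro') ** 2
    leakage = sum(np.linalg.norm(channels[j] @ Wk, 'fro') ** 2
                  for j in range(len(channels)) if j != k)
    return signal / (leakage + noise_var)
```

Since each constraint involves a single user's precoder, minimizing total transmit power subject to per-user SLNR floors splits into one small QCQP per user, which is the structural observation the paper exploits.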


2021 ◽  
Vol 13 (2) ◽  
pp. 312
Author(s):  
Xiongpeng Tang ◽  
Jianyun Zhang ◽  
Guoqing Wang ◽  
Gebdang Biangbalbe Ruben ◽  
Zhenxin Bao ◽  
...  

The demand for accurate long-term precipitation data is increasing, especially in the Lancang-Mekong River Basin (LMRB), where ground-based data are mostly unavailable or not accessible in a timely manner. Remote sensing and reanalysis quantitative precipitation products provide unprecedented observations to support water-related research, but these products are inevitably subject to errors. In this study, we propose a novel error correction framework that combines products from various institutions. The NASA Modern-Era Retrospective Analysis for Research and Applications (AgMERRA), the Asian Precipitation Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), the Climate Hazards group InfraRed Precipitation with Stations (CHIRPS), the Multi-Source Weighted-Ensemble Precipitation Version 1.0 (MSWEP), and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Records (PERSIANN) were used. Ground-based precipitation data from 1998 to 2007 were used to select precipitation products for correction, and the remaining 1979–1997 and 2008–2014 observed data were used for validation. The resulting precipitation products, MSWEP-QM (derived from quantile mapping, QM) and MSWEP-LS (derived from linear scaling, LS), are evaluated by statistical indicators and hydrological simulation across the LMRB. Results show that MSWEP-QM and MSWEP-LS better capture the major annual precipitation centers, yield excellent simulation results, and reduce the mean BIAS and mean absolute BIAS at most gauges across the LMRB. The two corrected products constitute improved climatological precipitation data sources, in both time and space, outperforming the five raw gridded precipitation products.
Of the two corrected products, in terms of mean BIAS, MSWEP-LS was slightly better than MSWEP-QM at the grid, point, and regional scales, and it also had better simulation results at all stations except Stung Treng. During the validation period, the average absolute BIAS of MSWEP-LS and MSWEP-QM decreased by 3.51% and 3.4%, respectively. Therefore, we recommend MSWEP-LS for water-related scientific research in the LMRB.
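The two correction techniques compared above have simple textbook forms, sketched below. These are generic single-series versions (the study applies them per grid cell, and LS is commonly fitted per calendar month); the function signatures are illustrative.

```python
import numpy as np

def linear_scaling(sim, obs_cal, sim_cal):
    """Linear-scaling (LS) bias correction: multiply the product series
    by the ratio of observed to simulated mean over a calibration
    period, so the corrected mean matches the observed mean."""
    factor = np.mean(obs_cal) / np.mean(sim_cal)
    return np.asarray(sim, dtype=float) * factor

def quantile_mapping(sim, obs_cal, sim_cal):
    """Empirical quantile mapping (QM): for each product value, find its
    empirical quantile in the calibration product data, then return the
    observed value at that same quantile, correcting the whole
    distribution rather than just the mean."""
    sim = np.asarray(sim, dtype=float)
    q = np.searchsorted(np.sort(sim_cal), sim, side='right') / len(sim_cal)
    return np.quantile(obs_cal, np.clip(q, 0.0, 1.0))
```

LS preserves the temporal pattern of the raw product and only rescales it, while QM reshapes the full distribution; the abstract's finding that LS yields a slightly smaller mean BIAS in the LMRB is consistent with LS directly targeting the mean.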

