A Deep Learning Approach for MIMO-NOMA Downlink Signal Detection

Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2526 ◽  
Author(s):  
Chuan Lin ◽  
Qing Chang ◽  
Xianxu Li

As a key candidate technique for fifth-generation (5G) mobile communication systems, non-orthogonal multiple access (NOMA) has attracted considerable attention in the field of wireless communication. Successive interference cancellation (SIC) is the main NOMA detection method applied at receivers for both uplink and downlink NOMA transmissions. However, SIC is limited by receiver complexity and error propagation. To address these problems, we explore a high-performance, high-efficiency tool: deep learning (DL). In this paper, we propose a learning method that automatically analyzes the channel state information (CSI) of the communication system and detects the original transmit sequences. In contrast to existing SIC schemes, which must search for the optimal ordering of channel gains and remove the signal with the higher power allocation factor before detecting a signal with a lower power allocation factor, the proposed deep learning method combines the channel estimation process with recovery of the desired signal suffering from channel distortion and multiuser signal superposition. Extensive performance simulations were conducted for the proposed MIMO-NOMA-DL system, and the results were compared with those of the conventional SIC method. The simulation results show that the deep learning method successfully addresses channel impairment and achieves good detection performance. Instead of implementing a well-designed detection algorithm, MIMO-NOMA-DL searches for the optimal solution via a neural network (NN). Consequently, deep learning is a powerful and effective tool for NOMA signal detection.
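The idea of replacing hand-designed SIC with a learned detector can be illustrated with a minimal sketch. This is not the paper's network: it is a tiny NumPy MLP trained to recover both users' BPSK bits directly from a two-user power-domain NOMA superposition over AWGN; the power split, SNR, batch size, and network width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Power-domain NOMA: the "far" user gets most of the power, the "near" user the rest.
a_far, a_near = 0.8, 0.2

def make_batch(n, snr_db=15):
    bits = rng.integers(0, 2, size=(n, 2))           # one BPSK bit per user
    sym = 2 * bits - 1                               # map {0,1} -> {-1,+1}
    x = np.sqrt(a_far) * sym[:, 0] + np.sqrt(a_near) * sym[:, 1]
    noise = rng.normal(scale=10 ** (-snr_db / 20), size=n)
    return (x + noise)[:, None], bits                # received samples, labels

# Tiny MLP: 1 input -> 16 tanh hidden units -> 2 sigmoid outputs (one bit per user)
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 2)); b2 = np.zeros(2)

def forward(y):
    h = np.tanh(y @ W1 + b1)
    return h, 1 / (1 + np.exp(-(h @ W2 + b2)))

lr = 0.1
for _ in range(3000):                                # plain SGD on cross-entropy
    y, bits = make_batch(256)
    h, p = forward(y)
    g = (p - bits) / len(y)                          # dL/dlogits for sigmoid + BCE
    gh = (g @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * y.T @ gh; b1 -= lr * gh.sum(0)

y, bits = make_batch(5000)
_, p = forward(y)
acc = ((p > 0.5) == bits).mean()
print(f"bit accuracy: {acc:.3f}")
```

Note that the network learns both users' bits jointly from the raw superposed sample, with no explicit ordering or cancellation stage, which is the structural contrast with SIC that the abstract describes.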

2021 ◽  
Vol 5 (4) ◽  
pp. 334-341
Author(s):  
D Venkata Ratnam ◽  
K Nageswara Rao

Advanced neural network methods solve significant signal estimation and channel characterization difficulties in next-generation 5G wireless communication systems. The multiple copies of the transmitted signal received over different paths at the receiver lead to delay spread, which in turn causes interference in communication. These adverse effects of the interference can be mitigated with the orthogonal frequency division multiplexing (OFDM) technique. Furthermore, proper signal detection methods and optimal channel estimation enhance the performance of the multicarrier wireless communication system. In this paper, a bi-directional long short-term memory (Bi-LSTM) based deep learning method is implemented to estimate the channel in different multipath scenarios. The impact of the pilots and the cyclic prefix on the performance of the Bi-LSTM algorithm is analyzed. It is evident from the symbol-error rate (SER) results that the Bi-LSTM algorithm performs better than the state-of-the-art channel estimation method known as the minimum mean square error (MMSE) estimator.
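The classical MMSE baseline that the Bi-LSTM is benchmarked against can be sketched for a single OFDM symbol. This is a generic illustration, not the paper's setup: the tap count, uniform power-delay profile, all-pilot symbol, and SNR are assumptions. The MMSE step smooths the least-squares (LS) estimate using the channel's frequency-domain correlation.

```python
import numpy as np

rng = np.random.default_rng(2)

K, L = 64, 4                              # subcarriers; channel taps (assumed)
h = rng.normal(size=L) + 1j * rng.normal(size=L)
h /= np.linalg.norm(h)                    # unit-power multipath channel
H = np.fft.fft(h, K)                      # true frequency response

X = rng.choice([-1.0, 1.0], K).astype(complex)   # known BPSK pilots
snr = 10.0                                # linear SNR (10 dB)
noise = (rng.normal(size=K) + 1j * rng.normal(size=K)) * np.sqrt(1 / (2 * snr))
Y = H * X + noise

H_ls = Y / X                              # LS: just divide out the pilots

# Frequency correlation R[a,b] = (1/L) * sum_p exp(-j2pi(a-b)p/K) for a
# uniform L-tap power profile, then MMSE smoothing of the LS estimate.
d = np.subtract.outer(np.arange(K), np.arange(K))
R = np.exp(-2j * np.pi * np.multiply.outer(d, np.arange(L)) / K).sum(axis=-1) / L
H_mmse = R @ np.linalg.solve(R + np.eye(K) / snr, H_ls)

mse_ls = np.mean(np.abs(H_ls - H) ** 2)
mse_mmse = np.mean(np.abs(H_mmse - H) ** 2)
print(f"LS   MSE: {mse_ls:.4f}")
print(f"MMSE MSE: {mse_mmse:.4f}")
```

The MMSE filter exploits the fact that an L-tap channel has a low-dimensional frequency response, which is why it suppresses much of the per-subcarrier LS noise; a learned estimator such as the Bi-LSTM aims to capture similar structure from data.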


Author(s):  
Ravisankar Malladi ◽  
Manoj Kumar Beuria ◽  
Ravi Shankar ◽  
Sudhansu Sekhar Singh

In modern wireless communication scenarios, non-orthogonal multiple access (NOMA) provides high throughput and spectral efficiency for fifth-generation (5G) and beyond-5G systems. Traditional NOMA detectors are based on successive interference cancellation (SIC) techniques for both uplink and downlink NOMA transmissions. However, due to imperfect SIC, these detectors are not suitable for defense applications. In this paper, we investigate the 5G multiple-input multiple-output (MIMO) NOMA deep learning technique for defense applications and propose a learning approach that automatically analyzes the communication system's channel state information and identifies the original transmit sequences. The proposed deep neural network provides the optimal solution, and its performance is much better than that of traditional SIC-based NOMA detectors. The analytical outcomes are verified through simulations.
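The imperfect-SIC problem that motivates the abstract can be shown numerically. A minimal sketch, assuming BPSK over AWGN and an illustrative power split: when a fraction of the far user's signal survives cancellation, the near user's symbol-error rate degrades relative to perfect SIC.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-user downlink NOMA over AWGN; the near user performs SIC:
# detect the far user's (high-power) symbol, subtract it, then detect its own.
a_far, a_near = 0.8, 0.2
n = 200_000
s_far = rng.choice([-1.0, 1.0], n)
s_near = rng.choice([-1.0, 1.0], n)
noise = rng.normal(scale=0.1, size=n)
y = np.sqrt(a_far) * s_far + np.sqrt(a_near) * s_near + noise

def sic_ser(residual):
    """Near-user SER when a fraction `residual` of the far-user signal
    survives cancellation (residual=0 means perfect SIC)."""
    s_far_hat = np.sign(y)                      # stage 1: far-user decision
    y_clean = y - (1 - residual) * np.sqrt(a_far) * s_far_hat
    s_near_hat = np.sign(y_clean)               # stage 2: near-user decision
    return np.mean(s_near_hat != s_near)

print("perfect SIC SER:  ", sic_ser(0.0))
print("imperfect SIC SER:", sic_ser(0.3))      # 30% residual interference
```

The residual term acts as uncancelled interference on top of the near user's low-power signal, which is exactly the error-propagation effect that a learned joint detector sidesteps by never forming an explicit cancellation stage.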


2020 ◽  
Author(s):  
Arthur Sousa de Sena ◽  
Pedro Nardelli

This paper addresses multi-user multi-cluster massive multiple-input-multiple-output (MIMO) systems with non-orthogonal multiple access (NOMA). Assuming the downlink mode, and taking into consideration the impact of imperfect successive interference cancellation (SIC), an in-depth analysis is carried out, in which closed-form expressions for the outage probability and ergodic rates are derived. Subsequently, the power allocation coefficients of users within each sub-group are optimized to maximize fairness. The considered power optimization is simplified to a convex problem, which makes it possible to obtain the optimal solution via the Karush-Kuhn-Tucker (KKT) conditions. Based on the achieved solution, we propose an iterative algorithm to provide fairness also among different sub-groups. Simulation results along with insightful discussions are provided to investigate the impact of imperfect SIC and demonstrate the fairness superiority of the proposed dynamic power allocation policies. For example, our results show that if the residual error propagation levels are high, the employment of orthogonal multiple access (OMA) is always preferable to NOMA. It is also shown that the proposed power allocation outperforms conventional massive MIMO-NOMA setups operating with fixed power allocation strategies in terms of outage probability.
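A minimal two-user sketch of max-min fair NOMA power allocation, not the paper's multi-cluster KKT solution: with perfect SIC, the far user's rate rises and the near user's falls as the far user's power share grows, so the max-min optimum is at the crossing point, found here by bisection. The channel gains and noise power are illustrative assumptions.

```python
import numpy as np

# Two-user downlink NOMA max-min fairness: find the power split where the
# far user's and near user's achievable rates are equal.
P, N = 1.0, 0.1          # total transmit power, noise power (assumed)
g_far, g_near = 0.3, 2.0 # channel gains (assumed)

def rates(a):
    # Far user decodes treating the near user's signal as interference;
    # near user applies perfect SIC first.
    r_far = np.log2(1 + a * P * g_far / ((1 - a) * P * g_far + N))
    r_near = np.log2(1 + (1 - a) * P * g_near / N)
    return r_far, r_near

lo, hi = 0.5, 1.0        # far user gets at least half the power
for _ in range(60):      # bisection on r_far(a) - r_near(a)
    a = (lo + hi) / 2
    r_far, r_near = rates(a)
    lo, hi = (a, hi) if r_far < r_near else (lo, a)

r_far, r_near = rates(a)
print(f"alpha = {a:.4f}, rates = ({r_far:.3f}, {r_near:.3f}) bit/s/Hz")
```

Monotonicity of the two rates in the power share is what makes the one-dimensional problem trivially solvable; the paper's contribution lies in handling the multi-cluster, imperfect-SIC case where this simple structure no longer suffices on its own.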


2021 ◽  
Author(s):  
Shiyou Lian

Starting from the problem of finding the approximate value of a function, this paper introduces a measure of the approximation degree between two numerical values, proposes the concepts of “strict approximation” and “strict approximation region”, derives the corresponding one-dimensional interpolation methods and formulas, and then presents a calculation model called the “sum-times-difference formula” for high-dimensional interpolation, thus developing a new interpolation approach: ADB interpolation. ADB interpolation is applied to the interpolation of actual functions with satisfactory results. In terms of both principle and effect, the approach is novel and has the advantages of simple calculation, stable accuracy, and ease of parallel processing; it is well suited to high-dimensional interpolation and is easily extended to the interpolation of vector-valued functions. Applying the approach to instance-based learning yields a new instance-based learning method: learning using ADB interpolation. This method has a definite mathematical basis, implicit distance weights, avoidance of misclassification, high efficiency, a wide range of applications, and interpretability. In principle, it is a kind of learning by analogy, which can complement deep learning (a form of inductive learning); for some problems, the two can even achieve “different approaches but equal results” in big data and cloud computing environments. Thus, learning using ADB interpolation can also be regarded as a kind of “wide learning” that is dual to deep learning.
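The abstract does not give the ADB formulas themselves, so the following sketch uses ordinary inverse-distance weighting as a stand-in to show the general idea of instance-based prediction by interpolation: a query is answered by weighting stored instances by their closeness, with distance weights implicit in the interpolation rather than set by hand.

```python
import numpy as np

# Stored instances and their known values (a toy regression problem).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.sin(X[:, 0])

def predict(q, eps=1e-9):
    """Inverse-distance-weighted prediction over all stored instances."""
    d = np.linalg.norm(X - q, axis=1)
    if d.min() < eps:                        # exact match: return stored value
        return y[d.argmin()]
    w = 1.0 / d ** 2                         # closer instances weigh more
    return w @ y / w.sum()

print(predict(np.array([1.5])))              # blends nearby stored values
print(predict(np.array([2.0])))              # exact instance -> sin(2)
```

Like the learning-by-analogy view in the abstract, there is no training phase: all the "knowledge" is the instance store, and generalization comes entirely from the interpolation rule.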


2016 ◽  
Vol 12 (1) ◽  
pp. 103-113 ◽  
Author(s):  
Mohammed Ibrahim ◽  
Haider AlSabbagh

Considerable work has been conducted on orthogonal frequency division multiple access (OFDMA) resource allocation using different algorithms and methods. However, most available studies optimize the system for one or two parameters under simple practical conditions/constraints. This paper presents an analysis and simulation of dynamic OFDMA resource allocation implemented with a Modified Multi-Dimension Genetic Algorithm (MDGA), an extension of the standard genetic algorithm. MDGA models the resource allocation problem to find the optimal or near-optimal solution for both subcarrier and power allocation in OFDMA. It takes into account the power and subcarrier constraints, the channel and noise distributions, the distance between user equipment (UE) and base stations (BS), and user priority weights, to approximate the most influential parameters encountered in OFDMA systems. At the same time, the multi-dimension genetic algorithm allows the solution space of the resource allocation problem to be explored effectively with its evolutionary operators: multi-dimension crossover and multi-dimension mutation. Four important cases of OFDMA resource allocation are addressed and analyzed under specific operation scenarios to meet the standard specifications of different advanced communication systems. The obtained results demonstrate that MDGA is effective in finding the optimal or near-optimal solution for both subcarrier and power allocation in OFDMA resource allocation.
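A toy genetic algorithm on the subcarrier-assignment part of the problem, as a much-simplified stand-in for MDGA: each chromosome maps subcarriers to users, fitness is the sum of log user rates (which rewards fairness), and evolution uses one-point crossover with random-reassignment mutation. The user/subcarrier counts, Rayleigh gains, and GA hyperparameters are illustrative assumptions; power allocation and the multi-dimension operators are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

U, K = 3, 16                       # users, subcarriers (assumed)
gain = rng.rayleigh(size=(U, K))   # per-user, per-subcarrier channel gains
snr = 10.0

def fitness(chrom):
    """Sum of log user rates for a subcarrier-to-user assignment."""
    rate = np.zeros(U)
    for k, u in enumerate(chrom):
        rate[u] += np.log2(1 + snr * gain[u, k] ** 2)
    return np.sum(np.log(rate + 1e-9))

pop = rng.integers(0, U, size=(40, K))       # random initial population
for _ in range(200):
    fit = np.array([fitness(c) for c in pop])
    pop = pop[np.argsort(fit)[::-1]]         # sort best-first
    elite = pop[:10]                         # elitism keeps the best found
    children = []
    while len(children) < 30:
        p1, p2 = elite[rng.integers(0, 10, 2)]
        cut = rng.integers(1, K)             # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        m = rng.random(K) < 0.05             # mutation: reassign a subcarrier
        child[m] = rng.integers(0, U, m.sum())
        children.append(child)
    pop = np.vstack([elite, children])

best = pop[0]
print("best fitness:", round(fitness(best), 3))
```

The log-sum fitness is one common way to trade throughput against fairness; MDGA's multi-dimension chromosome would additionally encode per-subcarrier power levels and use operators acting across both dimensions.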



