Stability and Delay of NDMA-MPR Protocol in Rice-Correlated Channels with Co-Channel Interference

Technologies ◽  
2019 ◽  
Vol 7 (1) ◽  
pp. 22
Author(s):  
Ramiro Sámano-Robles

This paper investigates backlog retransmission strategies for a class of random access protocols that combine retransmission diversity (i.e., network diversity multiple access, or NDMA) with multiple-antenna-based multi-packet reception (MPR). NDMA-MPR is proposed as a candidate for 5G contention-based, ultra-low-latency multiple access, based on the following known features of NDMA-MPR: (1) near collision-free performance, (2) very low latency, and (3) reduced feedback complexity (binary feedback). These features match the machine-type traffic, real-time, and dense object connectivity requirements of 5G. This work extends previous works by using a multiple-antenna receiver with correlated Rice channels and co-channel interference modelled as a Rayleigh fading variable. Two backlog retransmission strategies are implemented: persistent and randomized. Performance boundaries and an extended analysis of the system are obtained for different network and channel conditions. Average delay is evaluated using the M/G/1 queue model with statistically independent vacations. The results suggest that NDMA-MPR can achieve latency values low enough to guarantee real- or near-real-time performance for multiple access in 5G, even in scenarios with high correlation and moderate co-channel interference.
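As background for the delay model mentioned above, the mean waiting time of an M/G/1 queue with independent, identically distributed server vacations follows the standard decomposition result sketched below. This is a textbook expression, not the paper's NDMA-specific formula, whose service and vacation statistics depend on the protocol and channel model.

```latex
% Mean waiting time W and mean delay T in an M/G/1 queue with i.i.d. vacations,
% where S is the service time, V the vacation length, and \rho = \lambda\,\mathbb{E}[S] < 1:
W \;=\; \frac{\lambda\,\mathbb{E}[S^2]}{2\,(1-\rho)} \;+\; \frac{\mathbb{E}[V^2]}{2\,\mathbb{E}[V]},
\qquad
T \;=\; W \;+\; \mathbb{E}[S].
```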

Author(s):  
Artem Burkov ◽  
Seva Shneer ◽  
Andrey Turlikov

Introduction: The first versions of 5G networks are currently being deployed, and discussions are underway on the further development of cellular networks and the transition to the 6G standard. The Internet of Things (IoT) is expected to operate within the Massive Machine-Type Communications scenario, which imposes a number of requirements on operating characteristics: very high energy efficiency, relatively low delay, and fairly reliable communication. Random multiple access procedures are assumed, since the nature of the traffic makes it impossible to develop a channel resource sharing policy. To increase the efficiency of random access, a class of unblocked algorithms using orthogonal preambles can be employed. Purpose: To calculate the lower bound of the average delay for the class of unblocked random multiple access algorithms using orthogonal preambles. Methods: System analysis, theory of random processes, queuing theory, and simulation. Results: A model is proposed of a system with a potentially unlimited number of users who use random unblocked access to transmit data over a common communication channel using orthogonal preambles. A closed-form expression is obtained for the lower bound of the average delay in such a system as a function of the input arrival rate. The limiting value of the input arrival rate up to which the system operates stably is determined. Simulation results are presented alongside the obtained bound. Practical relevance: The obtained bound allows the lower average delay in the described class of algorithms to be estimated. Its application makes it possible to determine, at the design stage of random multiple access systems, whether the considered class of algorithms can be used given constraints on average delay.
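To make the operating regime concrete, the following is a minimal Monte-Carlo sketch of slotted random access with orthogonal preambles. It is an illustrative model under simplified assumptions, not the authors' system or their closed-form bound; lam, K and p_tx are hypothetical parameters.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for sampling a Poisson random variable."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def avg_delay(lam=0.5, K=4, p_tx=0.5, slots=200_000, seed=1):
    """Average delay (in slots) of unblocked slotted access with K orthogonal preambles."""
    rng = random.Random(seed)
    backlog, delays = [], []            # backlog holds the arrival slot of each pending packet
    for t in range(slots):
        backlog.extend([t] * poisson(lam, rng))   # new packets may transmit in their arrival slot
        picks = {}                      # preamble index -> indices of packets that chose it
        for idx in range(len(backlog)):
            if rng.random() < p_tx:
                picks.setdefault(rng.randrange(K), []).append(idx)
        won = {lst[0] for lst in picks.values() if len(lst) == 1}   # singleton preambles succeed
        delays.extend(t + 1 - backlog[i] for i in won)
        backlog = [a for i, a in enumerate(backlog) if i not in won]
    return sum(delays) / len(delays) if delays else float('inf')

print(f"average delay ~ {avg_delay():.2f} slots")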


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Ramiro Samano-Robles ◽  
Atilio Gameiro

In NDMA (network diversity multiple access), protocol-controlled retransmissions are used to create a virtual MIMO (multiple-input multiple-output) system, where collisions can be resolved via source separation. By using this retransmission diversity approach for collision resolution, NDMA is the family of random access protocols with the highest potential throughput. However, several issues remain open today in the modeling and design of this type of protocol, particularly in terms of dynamic stable performance and backlog delay. This paper attempts to partially fill this gap by proposing a Markov model for the study of the dynamic-stable performance of a symmetrical and non-blind NDMA protocol assisted by a multiple-antenna receiver. The model is useful in the study of stability aspects in terms of the backlog-user distribution and average backlog delay. It also allows for the investigation of the different states of the system and the transition probabilities between them. Unlike previous works, the proposed approach considers imperfect estimation of the collision multiplicity, a process that is crucial to the performance of NDMA. The results suggest that NDMA improves not only the throughput performance over previous solutions, but also the average number of backlogged users, the average backlog delay, and, in general, the stability of random access protocols. It is also shown that when multiuser detection conditions degrade, ALOHA-type backlog retransmission becomes relevant to the stable operation of NDMA.
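For illustration, the sketch below builds a toy backlog Markov chain and computes its stationary distribution and average backlog. The transition rule is hypothetical and much simpler than the paper's NDMA-specific model, which also accounts for imperfect collision-multiplicity estimation.

```python
import numpy as np

# States are numbers of backlogged users, truncated at N_MAX; P[i, j] is the
# one-slot transition probability. The birth-death rule below is a stand-in.
N_MAX = 50
ARRIVAL_P = 0.3      # probability that one new user joins the backlog in a slot (hypothetical)
SERVICE_P = 0.4      # probability that one backlogged user is resolved in a slot (hypothetical)

P = np.zeros((N_MAX + 1, N_MAX + 1))
for i in range(N_MAX + 1):
    up = ARRIVAL_P * (1 - SERVICE_P) if i < N_MAX else 0.0
    down = SERVICE_P * (1 - ARRIVAL_P) if i > 0 else 0.0
    P[i, min(i + 1, N_MAX)] += up
    P[i, max(i - 1, 0)] += down
    P[i, i] += 1.0 - up - down

# Stationary distribution: left eigenvector of P associated with eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

avg_backlog = float(np.dot(pi, np.arange(N_MAX + 1)))
print(f"average number of backlogged users ~ {avg_backlog:.2f}")
# Little's law then relates this average backlog to an average backlog delay.
```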


Author(s):  
Jahwan Koo ◽  
Nawab Muhammad Faseeh Qureshi ◽  
Isma Farah Siddiqui ◽  
Asad Abbas ◽  
Ali Kashif Bashir

Real-time data streaming fetches live sensory segments of the dataset in a heterogeneous distributed computing environment. This process assembles data chunks at a rapid encapsulation rate through a streaming technique that bundles sensor segments into multiple micro-batches and extracts them into a repository. Recently, the acquisition process has been enhanced with an additional feature of exchanging IoT devices' datasets comprising two components: (i) sensory data and (ii) metadata. The body of sensory data includes record information, and the metadata part consists of logs, heterogeneous events, and routing path tables used to transmit micro-batch streams into the repository. The real-time acquisition procedure uses a Directed Acyclic Graph (DAG) to extract live query outcomes from in-place micro-batches through MapReduce stages and returns a result set. However, a few bottlenecks affect performance during the execution process, such as (i) formation of homogeneous micro-batches only, (ii) complexity of dataset diversification, (iii) processing of heterogeneous data tuples, and (iv) linear DAG workflows only. As a result, it produces high processing latency and an additional cost of extracting event-enabled IoT datasets. Thus, a Spark cluster that processes Resilient Distributed Datasets (RDDs) at a fast pace using random-access memory (RAM) falls short of the expected robustness in processing IoT streams in the distributed computing environment. This paper presents an IoT-enabled Directed Acyclic Graph (I-DAG) technique that labels micro-batches at the stage of building a stream event and arranges stream elements with event labels. In the next step, heterogeneous stream events are processed through the I-DAG workflow, which has non-linear DAG operation, for extracting query results in a Spark cluster. The performance evaluation shows that I-DAG resolves homogeneous IoT-enabled stream event issues and provides an effective heterogeneous stream event solution for IoT-enabled datasets in Spark clusters.
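As a rough illustration of labelling micro-batch elements and routing heterogeneous events along separate branches, the following PySpark Structured Streaming sketch uses the built-in rate source and a hypothetical labelling rule. It is not the authors' I-DAG implementation.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("labelled-micro-batches").getOrCreate()

# The built-in "rate" source stands in for an IoT stream of (timestamp, value) rows.
stream = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Attach an event label when the micro-batch elements are built (hypothetical rule).
labelled = stream.withColumn(
    "event_label",
    F.when(F.col("value") % 3 == 0, F.lit("sensor"))
     .when(F.col("value") % 3 == 1, F.lit("log"))
     .otherwise(F.lit("routing")))

def route(batch_df, batch_id):
    # Each label follows its own branch instead of a single linear pipeline.
    for label in ("sensor", "log", "routing"):
        branch = batch_df.filter(F.col("event_label") == label)
        print(f"batch {batch_id}: {branch.count()} '{label}' events")

query = labelled.writeStream.foreachBatch(route).start()
query.awaitTermination(30)   # run the sketch for roughly 30 seconds
```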


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3715
Author(s):  
Ioan Ungurean ◽  
Nicoleta Cristina Gaitan

In the design and development process of fog computing solutions for the Industrial Internet of Things (IIoT), we need to take into consideration the characteristics of the industrial environment that must be met. These include low latency, predictability, response time, and operation under hard real-time constraints. A starting point may be the reference fog architecture released by the OpenFog Consortium (now part of the Industrial Internet Consortium), but it has a high abstraction level and does not define how to integrate the fieldbuses and devices into the fog system. Therefore, the biggest challenges in the design and implementation of fog solutions for IIoT are the diversity of fieldbuses and devices used in the industrial field and ensuring compliance with all constraints in terms of real-time operation, low latency, and predictability. Thus, this paper proposes a solution for a fog node that addresses these issues and integrates industrial fieldbuses. For practical implementation, there are specialized systems on chips (SoCs) that provide support for real-time communication with the fieldbuses through specialized coprocessors and peripherals. In this paper, we describe the implementation of the fog node on a system based on the Xilinx Zynq UltraScale+ MPSoC ZU3EG A484 SoC.


Author(s):  
Olivier Jaubert ◽  
Javier Montalt‐Tordera ◽  
Dan Knight ◽  
Gerry J. Coghlan ◽  
Simon Arridge ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 689
Author(s):  
Tom Springer ◽  
Elia Eiroa-Lledo ◽  
Elizabeth Stevens ◽  
Erik Linstead

As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for the resource-constrained target architectures at the “edge”. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general-purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove to be critical as these technologies move to the edge. In order to address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These types of mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified that the most time-critical applications, such as the control tasks, maintained low-latency deterministic behavior even during off-nominal conditions. The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate the integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
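The schedulability-analysis step described above can be illustrated with a standard fixed-priority response-time test applied to randomly generated task sets. The sketch below is a generic example, not the paper's VxWorks framework, and its task-set generator parameters are hypothetical.

```python
import math
import random

def response_time_ok(tasks):
    """tasks: list of (C, T) with C = worst-case execution time, T = period = deadline.
    Returns True if every task meets its deadline under rate-monotonic priorities."""
    tasks = sorted(tasks, key=lambda ct: ct[1])         # shorter period = higher priority
    for i, (C, T) in enumerate(tasks):
        R = C
        while True:
            interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
            R_next = C + interference
            if R_next > T:
                return False                            # deadline miss
            if R_next == R:
                break                                   # fixed point reached
            R = R_next
    return True

def random_task_set(n=5, total_util=0.7, seed=None):
    """Crude random task-set generator (not UUniFast); periods drawn log-uniformly."""
    rng = random.Random(seed)
    shares = [rng.random() for _ in range(n)]
    shares = [s / sum(shares) * total_util for s in shares]
    tasks = []
    for u in shares:
        T = 10 ** rng.uniform(1, 3)                     # period between 10 and 1000 time units
        tasks.append((u * T, T))
    return tasks

accepted = sum(response_time_ok(random_task_set(seed=s)) for s in range(1000))
print(f"{accepted}/1000 random task sets at 70% utilization are schedulable")
```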


ICT Express ◽  
2021 ◽  
Vol 7 (1) ◽  
pp. 41-48
Author(s):  
Eunkyung Kim ◽  
Heesoo Lee

2020 ◽  
Vol 66 (11) ◽  
pp. 6688-6722
Author(s):  
Shuqing Chen ◽  
Michelle Effros ◽  
Victoria Kostina
