A Network Slicing Framework for UAV-Aided Vehicular Networks

Drones ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 70
Author(s):  
Emmanouil Skondras ◽  
Emmanouel T. Michailidis ◽  
Angelos Michalas ◽  
Dimitrios J. Vergados ◽  
Nikolaos I. Miridakis ◽  
...  

In a fifth generation (5G) vehicular network architecture, several types of points of access (PoAs), including both road side units (RSUs) and aerial relay nodes (ARNs), can be leveraged to serve an increasing number of vehicular users. In such an architecture, efficient resource allocation schemes are indispensable. In this direction, this paper describes a network slicing scheme for 5G vehicular networks that aims to optimize the performance of modern network services. The proposed architecture consists of ground RSUs and unmanned aerial vehicles (UAVs) acting as ARNs, which enable communication between ground vehicular nodes and provide additional communication resources. Both RSUs and ARNs implement the LTE vehicle-to-everything (LTE-V2X) technology, while the position of each ARN is optimized by applying a fuzzy multi-attribute decision-making (fuzzy MADM) technique. In the proposed network architecture, each RSU maintains a local virtual resource pool (LVRP) containing local resource blocks (LRBs) and shared resource blocks (SRBs), while an SDN controller maintains a virtual resource pool (VRP) in which the SRBs of the RSUs are stored. In addition, each ARN maintains its own resource blocks (RBs). For users connected to an RSU, if the remaining RBs of the RSU satisfy the predefined threshold value, LRBs of the RSU are allocated to user services; otherwise, extra RBs from the VRP are allocated. Similarly, for users connected to ARNs, the satisfaction grade of each user service is monitored, considering both quality of service (QoS) and signal-to-interference-plus-noise ratio (SINR) factors. If the satisfaction grade exceeds the predefined threshold value, the service requirements can be satisfied by the remaining RBs of the ARN. Otherwise, the ARN borrows extra RBs from the LVRP of the corresponding RSU to achieve the required satisfaction grade. Performance evaluation shows that the proposed method optimizes resource allocation and improves the performance of the offered services in terms of throughput, packet transfer delay, jitter, and packet loss ratio, since the use of optimally positioned ARNs improves the channel conditions observed by each vehicular user.
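The RSU-side allocation rule described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the class names (`Rsu`, `VirtualResourcePool`), the `THRESHOLD` value, and the borrowing arithmetic are all assumptions made for the sketch.

```python
THRESHOLD = 10  # assumed minimum number of RBs an RSU keeps in reserve

class VirtualResourcePool:
    """SDN-controller pool holding the shared RBs (SRBs) of all RSUs."""
    def __init__(self, shared_rbs):
        self.shared_rbs = shared_rbs

    def borrow(self, amount):
        # Grant as many shared RBs as are available, up to the request.
        granted = min(amount, self.shared_rbs)
        self.shared_rbs -= granted
        return granted

class Rsu:
    def __init__(self, local_rbs, vrp):
        self.local_rbs = local_rbs
        self.vrp = vrp

    def allocate(self, requested):
        # If local RBs can serve the request while staying above the
        # threshold, allocate locally; otherwise borrow the deficit
        # from the shared virtual resource pool (VRP).
        if self.local_rbs - requested >= THRESHOLD:
            self.local_rbs -= requested
            return requested
        local_part = max(self.local_rbs - THRESHOLD, 0)
        deficit = requested - local_part
        borrowed = self.vrp.borrow(deficit)
        self.local_rbs -= local_part
        return local_part + borrowed
```

The same pattern applies in the ARN case, with the ARN borrowing from the LVRP of its RSU when the monitored satisfaction grade falls below the threshold.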

2020 ◽  
Vol 12 (17) ◽  
pp. 2670
Author(s):  
Maria Aspri ◽  
Grigorios Tsagkatakis ◽  
Panagiotis Tsakalides

Deep Neural Networks (DNNs) have established themselves as a fundamental tool in numerous computational modeling applications, overcoming the challenge of defining use-case-specific feature extraction processing by incorporating this stage into unified end-to-end trainable models. Despite their modeling capabilities, training large-scale DNN models is a very computation-intensive task that most single machines are incapable of accomplishing. To address this issue, different parallelization schemes have been proposed. Nevertheless, network overheads as well as optimal resource allocation pose major challenges, since network communication is generally slower than intra-machine communication, while some layers are more computationally expensive than others. In this work, we consider a novel multimodal DNN based on the Convolutional Neural Network architecture and explore several ways to optimize its performance when training is executed on an Apache Spark cluster. We evaluate the performance of different architectures via the metrics of network traffic and processing power, considering the case of land cover classification from remote sensing observations. Furthermore, we compare our architectures with an identical DNN architecture modeled after a data parallelization approach, using the metrics of classification accuracy and inference execution time. The experiments show that the way a model is parallelized has a tremendous effect on resource allocation, and that hyperparameter tuning can reduce network overheads. Experimental results also demonstrate that the proposed model parallelization schemes achieve more efficient resource use and more accurate predictions compared to data parallelization approaches.
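The contrast between the two parallelization strategies compared above can be illustrated with a toy sketch; this is not Spark code, and the function names are hypothetical, but it shows where the split happens in each scheme.

```python
def data_parallel(batches, model_fn, workers):
    # Data parallelism: every worker holds the FULL model and
    # processes its own shard of the input data.
    shards = [batches[i::workers] for i in range(workers)]
    return [[model_fn(x) for x in shard] for shard in shards]

def model_parallel(batch, layer_fns):
    # Model parallelism: each worker holds a DIFFERENT part of the
    # model (here, one layer); activations flow from worker to worker,
    # which is why inter-machine network overheads matter so much.
    out = batch
    for layer in layer_fns:  # each layer would live on a different machine
        out = layer(out)
    return out
```

In data parallelism the network cost is dominated by gradient synchronization of the full model; in model parallelism it is dominated by passing activations between the machines hosting consecutive layers, which is why layer placement and hyperparameters affect the overheads measured in the paper.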


Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1830 ◽  
Author(s):  
Anum Ali ◽  
Ghalib A. Shah ◽  
Junaid Arshad

Resource allocation for machine-type communication (MTC) devices is one of the key challenges in the 5G network, as it affects the lifetime of battery-powered devices and also the quality of service of the applications. MTC devices are battery-constrained and cannot afford high power consumption for spectrum usage. In this paper, we propose a novel resource allocation algorithm, the threshold-controlled access (TCA) protocol. We propose a novel technique of uplink resource allocation in which the devices decide on resource allocation blocks based on their battery status and their application's power profile, which eventually leads to the required quality of service (QoS). The first phase of the TCA algorithm selects the number of carriers to be allocated to a given device, extending the lifetime of low-power MTC devices. In the second phase, efficiency is achieved by introducing a threshold value, selected through a mapping based on a QoS metric. The threshold improves the selection of subcarriers for low-power devices, such as small e-health sensors. The algorithm is simulated for the physical layer of the 5G network. Simulation results show that the proposed algorithm is less complex and achieves better performance than existing solutions in the literature.
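The two phases described above can be sketched as follows. This is a minimal illustration in the spirit of TCA, not the paper's algorithm: the battery-to-carrier rule, the `QOS_THRESHOLD` mapping, and all constants are assumptions.

```python
def carriers_for_device(battery_level, power_per_carrier, max_carriers):
    """Phase 1: cap the number of subcarriers by what the battery affords."""
    affordable = int(battery_level // power_per_carrier)
    return min(affordable, max_carriers)

# Assumed mapping from application QoS class to a minimum-subcarrier threshold.
QOS_THRESHOLD = {"e-health": 1, "metering": 2, "video": 4}

def allocate(battery_level, power_per_carrier, max_carriers, qos_class):
    # Phase 2: enforce a QoS-derived threshold so that low-power devices
    # (e.g. small e-health sensors) still receive their minimum subcarriers.
    n = carriers_for_device(battery_level, power_per_carrier, max_carriers)
    return max(n, QOS_THRESHOLD.get(qos_class, 1))
```

The point of the threshold is visible in the second phase: a nearly drained device whose battery affords zero carriers is still granted the QoS-mandated minimum.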


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yaping Cui ◽  
Xinyun Huang ◽  
Dapeng Wu ◽  
Hao Zheng

The diversified service requirements in vehicular networks have stimulated the investigation of suitable technologies to satisfy the demands of vehicles. In this context, network slicing has been considered one of the most promising architectural techniques to cater to the various strict service requirements. However, the unpredictability of the service traffic of each slice, caused by the complex communication environments, leads to weak utilization of the allocated slicing resources. Thus, in this paper, we use Long Short-Term Memory (LSTM)-based resource allocation to reduce the total system delay. Specifically, we first formulate the radio resource allocation problem as a convex optimization problem to minimize system delay. Secondly, to further reduce delay, we design a Convolutional LSTM (ConvLSTM)-based traffic prediction model to predict the traffic of complex slice services in vehicular networks, which is then used in the resource allocation process. Three types of traffic are considered: SMS, phone, and web traffic. Finally, based on the predicted results, i.e., the traffic of each slice and the user load distribution, we exploit the primal-dual interior-point method to find the optimal slice resource weights. Numerical results show that the average error rates of predicted SMS, phone, and web traffic are 25.0%, 12.4%, and 12.2%, respectively, and the total delay is significantly reduced, which verifies the accuracy of the traffic prediction and the effectiveness of the proposed strategy.
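The final step, turning per-slice traffic predictions into resource weights, can be sketched as below. The paper solves a convex delay-minimization problem with a primal-dual interior-point method; here a simple proportional rule stands in for that solver, and the prediction values are illustrative, not the paper's data.

```python
def slice_weights(predicted_traffic):
    """Return a resource weight per slice, proportional to predicted load.

    A stand-in for the paper's interior-point solver: proportional
    allocation is the simplest feasible weighting and sums to 1.
    """
    total = sum(predicted_traffic.values())
    return {slice_name: load / total
            for slice_name, load in predicted_traffic.items()}

# Illustrative ConvLSTM outputs for the three considered traffic types.
pred = {"sms": 20.0, "phone": 50.0, "web": 30.0}
weights = slice_weights(pred)
```

In the full scheme, these weights would be recomputed as the ConvLSTM predictions are refreshed, so slices whose predicted load grows receive a larger share of the radio resources before congestion builds up.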


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Lubna Nadeem ◽  
Yasar Amin ◽  
Jonathan Loo ◽  
Muhammad A. Azam ◽  
Kok KEONG CHAI
