Cellular Traffic Prediction Based on an Intelligent Model

2021, Vol 2021, pp. 1-15
Author(s): Fawaz Waselallah Alsaade, Mosleh Hmoud Al-Adhaileh

The evolution of cellular technology has led to explosive growth in cellular network traffic. Accurate time-series models for predicting cellular mobile traffic have become very important for increasing the quality of service (QoS) within a network. The modelling and forecasting of cellular network loading play an important role in achieving the most favourable resource allocation through convenient bandwidth provisioning while preserving the highest network utilization. The novelty of the proposed research is a model that can intelligently predict load traffic in a cellular network. In this paper, a model that combines single-exponential smoothing with long short-term memory (SES-LSTM) is proposed to predict cellular traffic. Min-max normalization was used to scale the network loading. The single-exponential smoothing method was applied to adjust the volumes of network traffic, because network traffic is highly complex and takes different forms. The output of the single-exponential model was processed by an LSTM model to predict the network load. The intelligent system was evaluated using real cellular network traffic collected in a Kaggle dataset. The results of the experiment revealed that the proposed method had superior accuracy, achieving R-squared values of 88.21%, 92.20%, and 89.81% for three one-month time intervals, respectively. The predicted values were observed to be very close to the observations. A comparison of the prediction results between the existing LSTM model and the proposed system is presented; the proposed system achieved superior performance for predicting cellular network traffic.
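The preprocessing pipeline described above, min-max scaling of the load series followed by single-exponential smoothing before the LSTM stage, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the toy data, and the smoothing factor value are assumptions.

```python
def min_max_scale(series):
    """Scale a traffic series to [0, 1], as min-max normalization does."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

def single_exponential_smoothing(series, alpha=0.3):
    """Smooth noisy traffic volumes: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.

    alpha is illustrative; it would be tuned on the training data.
    """
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Example: scale then smooth a toy hourly load series; the smoothed
# sequence would then be windowed and fed to the LSTM predictor.
load = [120.0, 135.0, 90.0, 160.0, 155.0, 80.0]
prepared = single_exponential_smoothing(min_max_scale(load))
```

The smoothing step dampens the sharp fluctuations of raw traffic so the LSTM sees a more regular signal; the scaling keeps inputs in a range where recurrent networks train stably.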

Author(s): Qingtian Zeng, Qiang Sun, Geng Chen, Hua Duan

Abstract Wireless cellular traffic prediction is a critical issue for researchers and practitioners in the 5G/B5G field. However, it is very challenging, since wireless cellular traffic usually shows high nonlinearities and complex patterns. Most existing wireless cellular traffic prediction methods lack the ability to model the dynamic spatial–temporal correlations of wireless cellular traffic data and thus cannot yield satisfactory prediction results. To improve the accuracy of 5G/B5G cellular network traffic prediction, an attention-based multi-component spatiotemporal cross-domain neural network model (att-MCSTCNet) is proposed. It uses Conv-LSTM or Conv-GRU to model neighbouring data, daily-cycle data, and weekly-cycle data, and then assigns different weights to the three kinds of feature data through an attention layer, which improves their feature-extraction ability and suppresses feature information that interferes with prediction. Finally, the model is combined with timestamp feature embedding and fusion of multiple cross-domain data, jointly with other models, to assist in traffic prediction. Experimental results show that the prediction performance of the proposed model is better than that of existing models. The RMSE of the att-MCSTCNet (Conv-LSTM) model on the Sms, Call, and Internet datasets is improved by 13.70–54.96%, 10.50–28.15%, and 35.85–100.23%, respectively, compared with other existing models, and the RMSE of the att-MCSTCNet (Conv-GRU) model is improved by about 14.56–55.82%, 12.24–29.89%, and 38.79–103.17%, respectively.
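The attention layer described above weights the neighbour, daily-cycle, and weekly-cycle feature streams before they are fused. A minimal softmax-weighted fusion can be sketched as follows; the attention logits here are hypothetical placeholders for what the real model would learn, and all names are illustrative.

```python
import math

def attention_fuse(components, scores):
    """Fuse per-component feature vectors with softmax attention weights.

    components: equal-length feature vectors (e.g. neighbour, daily, weekly).
    scores: one attention logit per component (learned in the real model).
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    fused = [sum(w * comp[i] for w, comp in zip(weights, components))
             for i in range(len(components[0]))]
    return fused, weights

# Three toy feature vectors for one grid cell; the scores are placeholders.
neighbour, daily, weekly = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
fused, weights = attention_fuse([neighbour, daily, weekly], [2.0, 1.0, 0.5])
```

The softmax ensures the three component weights sum to one, so a stream whose logit is suppressed contributes little to the fused representation, which is how interfering feature information is down-weighted.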


Author(s): Quang Thanh Tran, Li Jun Hao, Quang Khai Trinh

Wireless traffic prediction plays an important role in network planning and management, especially for real-time decision making and short-term prediction. Systems require prediction methods with high accuracy, low cost, and low computational complexity. Although exponential smoothing is an effective method, it is little used with cellular networks, and research on data traffic is lacking. The accuracy and suitability of this method need to be evaluated on several types of traffic. Thus, this study introduces the application of exponential smoothing as a method of adaptive forecasting of cellular network traffic for cases of voice (in Erlang) and data (in megabytes or gigabytes). Simple and Error, Trend, Seasonal (ETS) methods are used for exponential smoothing. By investigating the effect of their smoothing factors in describing cellular network traffic, the forecast accuracy of each method is evaluated. This research comprises a comprehensive analysis using multiple case-study comparisons to determine the best-fit model. Different exponential smoothing models are evaluated for various traffic types on different time scales. The experiments are implemented on real data from a commercial cellular network, which is divided into a training part for modelling and a test part for forecasting comparison. This study found that the ETS framework is not suitable for hourly voice traffic, but it provides nearly the same results as the Holt–Winters' multiplicative seasonal (HWMS) method for both daily voice and daily data traffic. HWMS is presumably encompassed by the ETS framework and shows good results for all traffic cases. Therefore, HWMS is recommended for cellular network traffic prediction due to its simplicity and high accuracy.
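The recommended HWMS method maintains a level, a trend, and a set of multiplicative seasonal factors, updating each with its own smoothing factor. A minimal sketch follows; the initialization scheme and the parameter values in the example are common textbook choices, not the study's calibration.

```python
def holt_winters_multiplicative(series, season_len, alpha, beta, gamma, n_forecast):
    """Forecast with Holt-Winters' multiplicative seasonal smoothing.

    Assumes series covers at least two full seasons. alpha, beta, gamma are
    the level, trend, and seasonal smoothing factors, respectively.
    """
    # Initial level: mean of the first season; initial trend: average
    # per-step change between the first and second seasons.
    level = sum(series[:season_len]) / season_len
    trend = (sum(series[season_len:2 * season_len])
             - sum(series[:season_len])) / season_len ** 2
    seasonals = [series[i] / level for i in range(season_len)]
    for i in range(season_len, len(series)):
        x = series[i]
        s = seasonals[i % season_len]
        last_level = level
        level = alpha * (x / s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonals[i % season_len] = gamma * (x / level) + (1 - gamma) * s
    # h-step-ahead forecast: extrapolated level re-scaled by the season factor.
    return [(level + h * trend) * seasonals[(len(series) + h - 1) % season_len]
            for h in range(1, n_forecast + 1)]

# Example: daily traffic with a weekly season would use season_len=7;
# hourly traffic with a daily season would use season_len=24.
```

The multiplicative form suits traffic whose seasonal swings grow with the overall level, which is typical of daily voice and data volumes.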


Algorithms, 2020, Vol 13 (1), pp. 20
Author(s): Dehai Zhang, Linan Liu, Cheng Xie, Bing Yang, Qing Liu

With the arrival of 5G networks, cellular networks are moving in the direction of diversified, broadband, integrated, and intelligent networks. At the same time, the popularity of various smart terminals has led to explosive growth in cellular traffic. Accurate network traffic prediction has become an important part of cellular network intelligence. In this context, this paper proposes a deep learning method for spatiotemporal modelling and prediction of cellular network communication traffic. First, we analyze the temporal and spatial characteristics of cellular network traffic from Telecom Italia. On this basis, we propose a hybrid spatiotemporal network (HSTNet), a deep learning method that uses convolutional neural networks to capture the spatiotemporal characteristics of communication traffic. This work adds deformable convolution to the convolution model to improve predictive performance. The time attribute is introduced as auxiliary information, and an attention mechanism that adjusts weights based on historical data is proposed to improve the robustness of the module. We use the Telecom Italia dataset to evaluate the performance of the proposed model. Experimental results show that, compared with existing statistical methods and machine learning algorithms, HSTNet significantly improves prediction accuracy in terms of MAE and RMSE.
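Before a convolutional model like the one described can capture spatiotemporal structure, per-cell traffic records must be arranged as a (time, height, width) grid tensor and windowed into input/target pairs. A minimal sketch of that data layout, with all names and shapes illustrative rather than taken from the paper:

```python
def build_spatiotemporal_tensor(records, height, width, n_steps):
    """Arrange per-cell traffic volumes into a (time, height, width) tensor.

    records: dict mapping (t, row, col) -> traffic volume; absent cells
    default to 0.0. Returns nested lists shaped [n_steps][height][width],
    the grid layout a 2D-convolutional predictor expects per time step.
    """
    return [[[records.get((t, r, c), 0.0) for c in range(width)]
             for r in range(height)]
            for t in range(n_steps)]

def sliding_windows(frames, k):
    """Pair each window of k consecutive frames with the frame that follows,
    yielding (input history, prediction target) training examples."""
    return [(frames[i:i + k], frames[i + k]) for i in range(len(frames) - k)]

# Toy example: 2 time steps over a 2x3 grid of cells.
records = {(0, 0, 0): 5.0, (1, 1, 2): 3.0}
frames = build_spatiotemporal_tensor(records, height=2, width=3, n_steps=2)
```

The deformable-convolution and attention components would then operate on these frame stacks; they are omitted here since they require a deep learning framework.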


Author(s): Yufei Li, Xiaoyong Ma, Xiangyu Zhou, Pengzhen Cheng, Kai He, ...

Abstract Motivation Bio-entity coreference resolution focuses on identifying coreferential links in biomedical texts, which is crucial for completing bio-events' attributes and interconnecting events into bio-networks. Previously, deep neural network-based general-domain systems, among the most powerful tools, were applied to the biomedical domain with domain-specific information integration. However, such methods may introduce considerable noise because they insufficiently combine context with complex domain-specific information. Results In this paper, we explore how to leverage an external knowledge base in a fine-grained way to better resolve coreference by introducing a knowledge-enhanced long short-term memory (LSTM) network, which can more flexibly encode the knowledge information inside the LSTM. Moreover, we propose a knowledge attention module to effectively extract informative knowledge based on context. The experimental results on the BioNLP and CRAFT datasets achieve state-of-the-art performance, with gains of 7.5 F1 on BioNLP and 10.6 F1 on CRAFT. Additional experiments also demonstrate superior performance on cross-sentence coreferences. Supplementary information Supplementary data are available at Bioinformatics online.
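A knowledge attention module of the kind described scores candidate knowledge-base entries against the mention's context and pools them by those scores. A minimal dot-product-attention sketch, with all names and the embedding vectors purely illustrative:

```python
import math

def knowledge_attention(context, kb_vectors):
    """Select informative knowledge via dot-product attention on the context.

    context: a mention-context embedding (list of floats).
    kb_vectors: candidate knowledge-entry embeddings, same dimensionality.
    Returns the attention-pooled knowledge vector and the attention weights.
    """
    # Score each knowledge entry by its dot product with the context.
    scores = [sum(c * k for c, k in zip(context, kb)) for kb in kb_vectors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum: entries aligned with the context dominate the pool.
    pooled = [sum(w * kb[i] for w, kb in zip(weights, kb_vectors))
              for i in range(len(context))]
    return pooled, weights

# Toy 2-d embeddings: the first knowledge entry aligns with the context.
pooled, weights = knowledge_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

The pooled vector would then be injected into the LSTM's state, which is what lets context decide how much of the external knowledge to trust at each step.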


2021, Vol 2 (2)
Author(s): Kate Highnam, Domenic Puzio, Song Luo, Nicholas R. Jennings

Abstract Botnets and malware continue to avoid detection by static rule engines when using domain generation algorithms (DGAs) for callouts to unique, dynamically generated web addresses. Common DGA detection techniques fail to reliably detect DGA variants that combine random dictionary words to create domain names that closely mirror legitimate domains. To combat this, we created a novel hybrid neural network, Bilbo the "bagging" model, that analyses domains and scores the likelihood that they were generated by such algorithms and are therefore potentially malicious. Bilbo is the first parallel usage of a convolutional neural network (CNN) and a long short-term memory (LSTM) network for DGA detection. Our unique architecture is found to be the most consistent in performance in terms of AUC, F1 score, and accuracy when generalising across different dictionary DGA classification tasks, compared with current state-of-the-art deep learning architectures. We validate using reverse-engineered dictionary DGA domains and detail our real-time implementation strategy for scoring real-world network logs within a large enterprise. In 4 hours of actual network traffic, the model discovered at least five potential command-and-control networks that commercial vendor tools did not flag.
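In a parallel CNN/LSTM detector of this kind, both branches typically consume the same fixed-length character encoding of the domain name. A minimal featurizer sketch; the alphabet, the padding scheme, and the 63-character cap (the DNS label limit) are assumptions of this illustration, not details from the paper.

```python
# Hypothetical character vocabulary for domain names: lowercase letters,
# digits, hyphen, and dot; index 0 is reserved for padding and unknowns.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-."
CHAR_TO_ID = {ch: i + 1 for i, ch in enumerate(ALPHABET)}

def encode_domain(domain, max_len=63):
    """Map a domain to a fixed-length list of character ids (0 = padding).

    The same encoded sequence would feed both the CNN branch (n-gram-like
    local patterns) and the LSTM branch (long-range character dependencies),
    whose scores an ensemble ("bagging") layer then combines.
    """
    ids = [CHAR_TO_ID.get(ch, 0) for ch in domain.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))

x = encode_domain("sandpaper-lighthouse.example")
```

Character-level input is what lets such models score dictionary-word DGA domains, since the suspicious structure lives below the word level.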


Author(s): Sophia Bano, Francisco Vasconcelos, Emmanuel Vander Poorten, Tom Vercauteren, Sebastien Ourselin, ...

Abstract Purpose Fetoscopic laser photocoagulation is a minimally invasive surgery for the treatment of twin-to-twin transfusion syndrome (TTTS). Using a lens/fibre-optic scope inserted into the amniotic cavity, the abnormal placental vascular anastomoses are identified and ablated to regulate blood flow to both fetuses. A limited field of view, occlusions due to fetus presence, and low visibility make it difficult to identify all vascular anastomoses. Automatic computer-assisted techniques may provide better understanding of the anatomical structure during surgery for risk-free laser photocoagulation and may facilitate improving mosaics from fetoscopic videos. Methods We propose FetNet, a combined convolutional neural network (CNN) and long short-term memory (LSTM) recurrent neural network architecture for the spatio-temporal identification of fetoscopic events. We adapt an existing CNN architecture for spatial feature extraction and integrate it with the LSTM network for end-to-end spatio-temporal inference. We introduce differential learning rates during model training to effectively utilise the pre-trained CNN weights. This may support computer-assisted interventions (CAI) during fetoscopic laser photocoagulation. Results We perform quantitative evaluation of our method using 7 in vivo fetoscopic videos captured from different human TTTS cases. The total duration of these videos was 5551 s (138,780 frames). To test the robustness of the proposed approach, we perform 7-fold cross-validation, where each video is treated as a hold-out or test set and training is performed using the remaining videos. Conclusion FetNet achieved superior performance compared with existing CNN-based methods and provided improved inference because of its spatio-temporal information modelling. Online testing of FetNet, using a Tesla V100-DGXS-32GB GPU, achieved a frame rate of 114 fps. These results show that our method could potentially provide a real-time solution for CAI and automate occlusion and photocoagulation identification during fetoscopic procedures.
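The 7-fold evaluation protocol above, in which each video is held out exactly once while the rest are used for training, can be sketched generically; the video identifiers here are placeholders.

```python
def leave_one_out_folds(video_ids):
    """Yield (train_ids, test_id) pairs: each video is the hold-out once.

    Splitting by whole video (rather than by frame) prevents frames from
    the same procedure leaking between the training and test sets.
    """
    for held_out in video_ids:
        train = [v for v in video_ids if v != held_out]
        yield train, held_out

# 7 in vivo fetoscopic videos, as in the evaluation described above.
videos = [f"video_{i}" for i in range(1, 8)]
folds = list(leave_one_out_folds(videos))
```

Per-video splitting is the conservative choice for surgical video, since adjacent frames are highly correlated and frame-level splits would inflate test scores.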

