A Long-Term Cost-Oriented Cloudlet Planning Method in Wireless Metropolitan Area Networks

Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1213
Author(s):  
Xinjie Guan ◽  
Xili Wan ◽  
Tianjing Wang ◽  
Yifeng Li

As an extension of remote cloud data centers, cloudlets process workloads from mobile users at the network edge, thereby satisfying the requirements of resource-intensive and latency-sensitive applications. One fundamental yet important issue for cloudlet infrastructure providers (ISPs) is how to plan the placement and capacities of cloudlets so as to minimize their long-term cost while guaranteeing service delay. However, existing work mostly focuses on resource provisioning or resource management for mobile services on existing cloudlets, while very little attention has been paid to the cloudlet placement and capacity planning problem. In contrast to those studies, we aim to optimize the long-term total cost of cloudlet ISPs by intelligently planning the locations and capacities of cloudlets under constraints on the service delay experienced by mobile users. This problem is decomposed into two sub-problems, and algorithms are devised to solve them. Evaluations on randomly generated traces and real traces exhibit the superior performance of the proposed solution in saving ISPs' long-term cost.
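The abstract does not detail the two sub-problems or the authors' algorithms, but the flavor of delay-constrained, cost-minimizing placement can be sketched with a generic greedy set-cover heuristic. Everything below (site names, delay and cost tables, the cost-per-newly-covered-user rule) is an illustrative assumption, not the paper's method:

```python
def place_cloudlets(sites, users, delay, cost, max_delay):
    """Greedily pick cloudlet sites so every user is within max_delay,
    choosing at each step the site with the lowest cost per newly
    covered user (a classic set-cover heuristic, used here only to
    illustrate the problem shape)."""
    uncovered = set(users)
    chosen = []
    while uncovered:
        best, best_cov, best_ratio = None, set(), float("inf")
        for s in sites:
            if s in chosen:
                continue
            # users this site could still serve within the delay bound
            covers = {u for u in uncovered if delay[s][u] <= max_delay}
            if covers and cost[s] / len(covers) < best_ratio:
                best, best_cov, best_ratio = s, covers, cost[s] / len(covers)
        if best is None:
            raise ValueError("delay bound unreachable for some users")
        chosen.append(best)
        uncovered -= best_cov
    return chosen
```

Capacity planning would then size each chosen site for its assigned load; that second stage is omitted here.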

Sensors ◽  
2018 ◽  
Vol 19 (1) ◽  
pp. 32 ◽  
Author(s):  
Feng Zeng ◽  
Yongzheng Ren ◽  
Xiaoheng Deng ◽  
Wenjia Li

Remote clouds are gradually becoming unable to achieve the ultra-low latency that mobile users require, owing to the long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate these effects. Existing studies mostly assume that edge servers have already been deployed properly and focus only on minimizing the delay between edge servers and mobile users. In this paper, considering a practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring certain QoS requirements. To better fit a generalized setting, we extend the definition of the dominating set and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: in one, the capacities of edge servers can be configured on demand; in the other, all edge servers have the same capacity. For the on-demand condition, a greedy-based algorithm is proposed to find the solution; its key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree, and cluster-size constraints. Furthermore, a simulated-annealing-based approach is given for global optimization. For the second condition, a greedy-based algorithm is also proposed to satisfy the capacity constraint of edge servers while minimizing the number of edge servers. The simulation results show that the proposed algorithms are feasible.
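The greedy idea described above, pick the node whose neighborhood covers the most still-uncovered nodes, is the standard heuristic for minimum dominating set. A minimal sketch, with only the cluster-size cap modeled and the delay/degree constraints folded into the adjacency input (the paper's full constraint handling is not reproduced):

```python
def greedy_edge_servers(adj, max_cluster):
    """adj: dict mapping node -> set of neighbors reachable within the
    delay bound. Repeatedly place a server at the node whose closed
    neighborhood covers the most uncovered nodes, serving at most
    max_cluster of them per server."""
    uncovered = set(adj)
    servers = []
    while uncovered:
        node = max(adj, key=lambda n: len((adj[n] | {n}) & uncovered))
        covered = sorted((adj[node] | {node}) & uncovered)[:max_cluster]
        if not covered:
            break  # safeguard; each node covers at least itself
        servers.append(node)
        uncovered.difference_update(covered)
    return servers
```

On a 5-node path graph, for example, this places servers at the two interior hubs rather than one per node.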


Author(s):  
Lei Chen ◽  
Cihan Varol ◽  
Qingzhong Liu ◽  
Bing Zhou

Thanks to their much larger geographical coverage and pleasing data-transmission bandwidth, Wireless Metropolitan Area Networks (WMANs) have become widely accepted in many countries for everyday communications. Two of the main wireless technologies used in WMANs, Worldwide Interoperability for Microwave Access (WiMAX, also known as Wireless Local Loop or WLL) and Long Term Evolution (LTE), have generated billions of dollars in the ever-growing wireless communication market. While the IEEE 802.16 standards for WiMAX and the 3GPP LTE standards are updated and improved almost annually, current standards inevitably still contain a number of security vulnerabilities, potentially leading to various security attacks. To address the security concerns in these two WMAN technologies, this chapter presents the technical details of the security aspects of WiMAX and LTE. More specifically, the key generation, authentication, and data and key confidentiality and integrity of both technologies are discussed. The chapter ends with a discussion of the security vulnerabilities, threats, and countermeasures of WiMAX and LTE.
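Both standards derive session keys from a master key through a layered key hierarchy. As a generic illustration only (this is an HMAC-SHA256 expansion sketch, not the actual WiMAX or LTE key-derivation function, whose inputs and PRFs are specified in IEEE 802.16 and 3GPP TS 33.401):

```python
import hmac
import hashlib

def derive_key(master, label, length=32):
    """Expand a master key into a labeled subordinate key by iterating
    HMAC-SHA256 over a counter, in the style of counter-mode KDFs.
    Different labels yield independent keys from the same master."""
    out, counter = b"", 1
    while len(out) < length:
        out += hmac.new(master, bytes([counter]) + label,
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]
```

The point of such hierarchies is that compromise of one derived key (e.g., an encryption key) does not expose its siblings or the master key.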


2008 ◽  
Author(s):  
George S. Yip ◽  
Timothy M. Devinney ◽  
Gerry Johnson

2017 ◽  
Vol 26 (1) ◽  
pp. 113-128
Author(s):  
Gamal Eldin I. Selim ◽  
Mohamed A. El-Rashidy ◽  
Nawal A. El-Fishawy

2020 ◽  
Vol 10 (5) ◽  
pp. 1557
Author(s):  
Weijia Feng ◽  
Xiaohui Li

Ultra-dense and highly heterogeneous network (HetNet) deployments make the allocation of limited wireless resources among ubiquitous Internet of Things (IoT) devices an unprecedented challenge in 5G and beyond (B5G) networks. The interactions between mobile users and HetNets remain to be analyzed, where mobile users choose optimal networks to access and the HetNets adopt proper methods for allocating their own network resources. Existing works generally require complete information to be shared among mobile users and HetNets. However, this is not practical in realistic situations, where important individual information is protected and not made public. This paper proposes a distributed pricing and resource allocation scheme based on a Stackelberg game with incomplete information. The proposed model proves more practical by solving the problem that important information about either mobile users or HetNets is difficult to acquire during the resource allocation process. Considering the unknowability of channel gain information, the follower game among users is modeled as an incomplete-information game, with channel gain regarded as each player's type. Given the pricing strategies of the networks, users adjust their bandwidth-requesting strategies to maximize their expected utility. Based on the sub-equilibrium obtained in the follower game, the networks correspondingly update their pricing strategies to be optimal. The existence and uniqueness of the Bayesian Nash equilibrium are proved. A probabilistic prediction method makes the incomplete-information game feasible, and a reverse deduction method is utilized to obtain the game equilibrium. Simulation results show the superior performance of the proposed method.
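The leader–follower structure can be made concrete with a toy complete-information version (the paper's Bayesian treatment replaces the known gains below with expectations over types). Assuming a user utility of the hypothetical form gain·log(1+b) − price·b, the follower's best response has a closed form, and the leader searches prices against it:

```python
import math

def best_response(gain, price):
    # Maximize gain*log(1+b) - price*b over b >= 0:
    # d/db = gain/(1+b) - price = 0  ->  b* = gain/price - 1
    return max(gain / price - 1.0, 0.0)

def leader_price(gains, candidate_prices):
    """Stackelberg leader: pick the price maximizing revenue, given
    that each follower plays its best response to that price."""
    def revenue(p):
        return p * sum(best_response(g, p) for g in gains)
    return max(candidate_prices, key=revenue)
```

Note the backward-induction order: the leader optimizes *through* the followers' responses, which mirrors the reverse deduction method mentioned in the abstract.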


2015 ◽  
Vol 3 (3) ◽  
pp. 290-303 ◽  
Author(s):  
David Candeia ◽  
Ricardo Araujo Santos ◽  
Raquel Lopes

2021 ◽  
Vol 11 (9) ◽  
pp. 3870
Author(s):  
Jeongsu Kim ◽  
Kyungwoon Lee ◽  
Gyeongsik Yang ◽  
Kwanhoon Lee ◽  
Jaemin Im ◽  
...  

This paper investigates the performance interference of blockchain services that run on cloud data centers. As data centers offer shared computing resources to multiple services, blockchain services can experience performance interference due to co-located services. We explore the impact of this interference on the performance of Hyperledger Fabric, the most popular blockchain platform, and develop a new technique to offer performance isolation for it. First, we analyze the characteristics of the different components in Hyperledger Fabric and show that they have different impacts on Fabric's performance. Then, we present QiOi, a component-level performance isolation technique for Hyperledger Fabric. The key idea of QiOi is to dynamically control the CPU scheduling of Fabric components to cope with performance interference. We implement QiOi as a user-level daemon and evaluate how it mitigates the performance interference of Fabric. The evaluation results demonstrate that QiOi mitigates the performance degradation of Fabric by 22% and improves Fabric latency by 2.5 times without sacrificing the performance of co-located services. In addition, we show that QiOi can support different ordering services and chaincodes with negligible overhead to Fabric performance.
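The abstract does not specify QiOi's control law, but "dynamically control the CPU scheduling of components" suggests a feedback loop over per-component CPU shares (e.g., cgroup weights). A hedged toy controller, with made-up step size and deadband, purely to illustrate the idea:

```python
def adjust_share(share, latency, target, step=0.1, lo=0.05, hi=1.0):
    """Toy proportional-style controller: raise a component's CPU
    share when its observed latency exceeds the target, lower it
    when there is clear headroom, and clamp to [lo, hi]. A real
    daemon would write the result to a cgroup weight."""
    if latency > target:
        share *= 1 + step          # component is suffering: give more CPU
    elif latency < 0.8 * target:
        share *= 1 - step          # headroom: return CPU to co-located work
    return min(max(share, lo), hi)
```

Running one such loop per Fabric component (orderer, peer, chaincode container) would approximate component-level isolation without a global scheduler change.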


2021 ◽  
Vol 13 (15) ◽  
pp. 2938
Author(s):  
Feng Li ◽  
Haihong Zhu ◽  
Zhenwei Luo ◽  
Hang Shen ◽  
Lin Li

Separating point clouds into ground and nonground points is an essential step in processing airborne laser scanning (ALS) data for various applications. Interpolation-based filtering algorithms have been commonly used for filtering ALS point cloud data. However, most conventional interpolation-based algorithms exhibit a drawback in retaining abrupt terrain characteristics, resulting in poor precision in these regions. To overcome this drawback, this paper proposes an improved adaptive surface interpolation filter with a multilevel hierarchy, using cloth simulation and relief amplitude. This method uses three hierarchy levels of provisional digital elevation model (DEM) raster surfaces with thin plate spline (TPS) interpolation to separate ground points from unclassified points based on adaptive residual thresholds. A cloth simulation algorithm is adopted to generate sufficient effective initial ground seeds for constructing high-quality topographic surfaces. Residual thresholds are adaptively constructed from the relief amplitude of the examined area to capture complex landscape characteristics during classification. Fifteen samples from the International Society for Photogrammetry and Remote Sensing (ISPRS) commission are used to assess the performance of the proposed algorithm. The experimental results indicate that the proposed method produces satisfactory results in both flat and steep areas. In a comparison with other approaches, the method demonstrates superior filtering performance with the lowest omission error rate; in particular, it retains discontinuous terrain features with steep slopes and terraces.
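The core classification step, interpolate a provisional surface from ground seeds, then accept points whose residual falls under a relief-adaptive threshold, can be sketched as follows. Inverse-distance weighting stands in for the paper's TPS interpolation, and the threshold rule (base plus a fraction of the seeds' relief amplitude) is an assumption for illustration:

```python
from math import hypot

def classify_ground(points, seeds, base_thresh, relief_scale=0.1):
    """points, seeds: lists of (x, y, z). Build a provisional surface
    from the seeds by inverse-distance weighting (a stand-in for TPS),
    widen the residual threshold with the seeds' relief amplitude, and
    flag each point as ground if its residual is within the threshold."""
    zs = [z for _, _, z in seeds]
    thresh = base_thresh + relief_scale * (max(zs) - min(zs))
    ground = []
    for x, y, z in points:
        w_sum = z_sum = 0.0
        for sx, sy, sz in seeds:
            w = 1.0 / (hypot(x - sx, y - sy) ** 2 + 1e-9)
            w_sum += w
            z_sum += w * sz
        ground.append(abs(z - z_sum / w_sum) <= thresh)
    return ground
```

In the full method this runs at three hierarchy levels, with the newly accepted ground points of one level seeding the next, denser surface.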

