Latency-Aware Computation Offloading for 5G Networks in Edge Computing

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Xianwei Li ◽  
Baoliu Ye

With the development of the Internet of Things, massive computation-intensive tasks are generated by mobile devices, whose limited computing and storage capacities lead to poor quality of service. Edge computing, as an effective computing paradigm, was proposed for efficient and real-time data processing by providing computing resources at the edge of the network. The deployment of 5G promises to speed up data transmission but also further increases the number of tasks to be offloaded. However, how to transfer data or tasks to edge servers in 5G for processing with high response efficiency remains a challenge. In this paper, a latency-aware computation offloading method for 5G networks is proposed. First, the latency and energy consumption models of edge computation offloading in 5G are defined. Then a fine-grained computation offloading method is employed to reduce the overall completion time of the tasks. The approach is further extended to solve the multiuser computation offloading problem. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted. The results show that the proposed offloading method can effectively reduce the execution latency of the tasks.
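Latency and energy models of this kind typically take a standard form: local execution time is the task's CPU-cycle demand divided by the device's clock rate, while offloading pays an uplink transmission delay plus edge execution time. A minimal sketch, where all numbers and symbol names (`cycles`, `f_local`, `rate`, `p_tx`, `p_cpu`) are illustrative assumptions rather than the paper's exact notation:

```python
def local_cost(cycles, f_local, p_cpu):
    """Time and energy when the task runs on the mobile device."""
    t = cycles / f_local          # execution latency (s)
    e = p_cpu * t                 # energy drawn by the local CPU (J)
    return t, e

def offload_cost(data_bits, cycles, rate, f_edge, p_tx):
    """Time and energy when the task is offloaded over the 5G uplink."""
    t_up = data_bits / rate       # transmission latency (s)
    t_exec = cycles / f_edge      # execution latency on the edge server (s)
    e = p_tx * t_up               # the device only pays for transmission energy
    return t_up + t_exec, e

# A latency-aware decision offloads the task when doing so lowers completion time:
t_loc, e_loc = local_cost(cycles=1e9, f_local=1e9, p_cpu=0.9)
t_off, e_off = offload_cost(data_bits=4e6, cycles=1e9, rate=1e8, f_edge=1e10, p_tx=1.3)
decision = "offload" if t_off < t_loc else "local"
```

With these example values, the edge server's faster CPU outweighs the 0.04 s uplink delay, so offloading wins on latency.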

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2628
Author(s):  
Mengxing Huang ◽  
Qianhao Zhai ◽  
Yinjie Chen ◽  
Siling Feng ◽  
Feng Shu

Computation offloading is one of the most important problems in edge computing. Through computation offloading, devices can transmit computation tasks to servers for execution. However, given the limitations of network conditions, not all computation tasks can be offloaded to servers. It is therefore important to decide quickly how many tasks should be executed on servers and how many locally; only properly offloaded computation tasks improve the Quality of Service (QoS). Some existing methods focus on only a single objective, while others have high computational complexity, and no method yet balances objectives and complexity well enough for universal application. In this study, a Multi-Objective Whale Optimization Algorithm (MOWOA) based on time and energy consumption is proposed to find the optimal offloading mechanism for computation offloading in mobile edge computing. This is the first time MOWOA has been applied in this area. To improve the quality of the solution set, crowding degrees are introduced and all solutions are sorted by crowding degree. Additionally, an improved MOWOA (MOWOA2) using the gravity reference point method is proposed to obtain better diversity of the solution set. Compared with some typical approaches, such as the Grid-Based Evolutionary Algorithm (GrEA), the Cluster-Gradient-based Artificial Immune System Algorithm (CGbAIS), and the Non-dominated Sorting Genetic Algorithm III (NSGA-III), MOWOA2 performs better in terms of the quality of the final solutions.
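The crowding degree used to rank solutions is, in the standard multi-objective formulation, the crowding distance of NSGA-II: the normalized spread of each solution's neighbours along every objective, with boundary solutions kept unconditionally. A sketch under that assumption (the paper's exact variant may differ):

```python
def crowding_distance(front):
    """front: list of objective tuples, e.g. (time, energy). Returns one
    crowding distance per solution; larger means less crowded."""
    n = len(front)
    dist = [0.0] * n
    for k in range(len(front[0])):            # one pass per objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        if hi == lo:
            continue
        for j in range(1, n - 1):             # normalized neighbour gap
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (hi - lo)
    return dist

# Solutions in a crowded region get small distances and are pruned first:
front = [(1.0, 4.0), (2.0, 3.0), (2.1, 2.9), (4.0, 1.0)]
d = crowding_distance(front)
```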


2019 ◽  
Vol 10 (1) ◽  
pp. 203 ◽  
Author(s):  
Luan N. T. Huynh ◽  
Quoc-Viet Pham ◽  
Xuan-Qui Pham ◽  
Tri D. T. Nguyen ◽  
Md Delowar Hossain ◽  
...  

In recent years, multi-access edge computing (MEC) has become a promising technology in 5G networks, owing to its ability to offload computational tasks from mobile devices (MDs) to edge servers and thereby address MD-specific limitations. Despite considerable research on computation offloading in 5G networks, offloading in multi-tier, multi-MEC-server systems continues to attract attention. Here, we investigated a two-tier computation-offloading strategy for multi-user, multi-MEC-server heterogeneous networks. For this scenario, we formulated a joint resource-allocation and computation-offloading decision strategy to minimize the total computing overhead of MDs, including completion time and energy consumption. The optimization problem was formulated as a mixed-integer nonlinear programming problem, which is NP-hard. Given its complexity and various application constraints, we divided the original problem into two subproblems: resource-allocation and computation-offloading decisions. We developed an efficient, low-complexity algorithm using particle swarm optimization, a high-level heuristic (i.e., meta-heuristic) that performs well on challenging optimization problems and is capable of producing high-quality solutions with guaranteed convergence. Simulation results indicated that the proposed algorithm significantly reduced the total computing overhead of MDs relative to several baseline methods while converging to stable solutions.
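Particle swarm optimization itself follows a simple velocity and position update driven by each particle's personal best and the swarm's global best. A self-contained sketch, where the `overhead` fitness (a toy weighted time-plus-energy surrogate over per-task offloading fractions) is invented for illustration and is not the authors' model:

```python
import random

def overhead(x):
    # x[i] in [0, 1]: fraction of task i offloaded; toy surrogate objective
    return sum((0.6 * (1 - xi) + 0.4 * xi) ** 2 + 0.1 * xi for xi in x)

def pso(dim=4, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.random() for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=overhead)             # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            if overhead(pos[i]) < overhead(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=overhead)
    return gbest

random.seed(1)
best = pso()
```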


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1446 ◽  
Author(s):  
Liang Huang ◽  
Xu Feng ◽  
Luxin Zhang ◽  
Liping Qian ◽  
Yuan Wu

This paper studies mobile edge computing (MEC) networks in which multiple wireless devices (WDs) offload their computation tasks to multiple edge servers and one cloud server. Considering the different real-time computation tasks at different WDs, each task is either processed locally at its WD or offloaded to and processed at one of the edge servers or the cloud server. In this paper, we investigate low-complexity computation offloading policies that guarantee the quality of service of the MEC network while minimizing the WDs’ energy consumption. Specifically, both a linear programming relaxation-based (LR-based) algorithm and a distributed deep learning-based offloading (DDLO) algorithm are studied independently for MEC networks. We further propose a heterogeneous DDLO to achieve better convergence performance than DDLO. Extensive numerical results show that the DDLO algorithms achieve better performance than the LR-based algorithm. Furthermore, the DDLO algorithm generates an offloading decision in less than 1 millisecond, several orders of magnitude faster than the LR-based algorithm.
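At its core, DDLO generates several candidate binary offloading decisions in parallel and keeps the best one; in the actual algorithm the candidates come from multiple DNNs that are then retrained on the winner. The sketch below substitutes random generators for the DNNs purely to illustrate the select-the-best step; all energy figures are invented:

```python
import random

random.seed(0)

E_LOCAL = [0.9, 0.4, 1.2, 0.7]   # energy if each task runs on its device (J), assumed
E_OFF   = [0.3, 0.5, 0.4, 0.6]   # energy if each task is offloaded (J), assumed

def energy(decision):
    """Total device energy for a binary offloading vector (1 = offload)."""
    return sum(E_OFF[i] if d else E_LOCAL[i] for i, d in enumerate(decision))

def best_candidate(num_generators=16):
    """Each generator proposes one decision vector; keep the lowest-energy one."""
    candidates = [[random.randint(0, 1) for _ in E_LOCAL]
                  for _ in range(num_generators)]
    return min(candidates, key=energy)

decision = best_candidate()
```

In real DDLO the winning decision is also fed back as a training label, so the generators gradually learn to propose good decisions directly.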


2019 ◽  
Vol 26 (1) ◽  
pp. 146-169
Author(s):  
Ruslan L. Smeliansky

The computing paradigm based on giant data centers is being replaced by a new one. The urgency of this shift stems from the requirements of new applications that actively use video, real-time interactivity, and new mobile communication technologies, which today cannot be implemented without cloud computing and virtualization based on SDN and NFV technologies. The paper considers the requirements dictated by these applications and outlines the architecture of the new paradigm, which we call Hierarchical Edge Computing (HEC). Attention is focused on the fact that all these applications are distributed, are increasingly real-time, and require guaranteed quality of service in network operation. The main scientific problems that need to be solved to implement this new paradigm are discussed.


Author(s):  
Amin Ebrahimzadeh ◽  
Martin Maier

Next-generation optical access networks have to cope with the contradiction between the intense computation and ultra-low latency requirements of immersive applications and the limited resources of smart mobile devices. In this chapter, after a brief overview of related work on multi-access edge computing (MEC), the authors explore the potential of full and partial decentralization of computation by leveraging mobile end-user equipment in an MEC-enabled FiWi-enhanced LTE-A HetNet. They design a two-tier hierarchical MEC-enabled FiWi-enhanced HetNet-based architecture for computation offloading that leverages both local (i.e., on-device) and nonlocal (i.e., MEC/cloud-assisted) computing resources to achieve low response time and energy consumption for mobile users. They also propose a simple yet efficient task offloading mechanism to achieve an improved quality of experience (QoE) for mobile users.


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Jianguo Sun ◽  
Yang Yang ◽  
Zechao Liu ◽  
Yuqing Qiao

Currently, the Internet of Things (IoT) provides individuals with real-time data processing and efficient data transmission services, relying on extensive edge infrastructures. However, those infrastructures may disclose sensitive consumer information without authorization, which has made data access control a widely researched topic. Ciphertext-policy attribute-based encryption (CP-ABE) is regarded as an effective cryptographic tool for providing users with fine-grained access policies. In prior ABE schemes, the attribute universe is managed by a single trusted central authority (CA), which reduces security and efficiency. In addition, all attributes are considered equally important in the access policy, so the policy cannot be expressed flexibly. In this paper, we propose two schemes based on a new form of encryption, multi-authority criteria-based encryption (CE). These schemes express each criterion as a polynomial and attach a weight to it. Unlike in ABE schemes, decryption succeeds if and only if a user satisfies the access policy and the accumulated weight exceeds the threshold. The proposed schemes are proved secure under the q-decisional bilinear Diffie–Hellman exponent (q-BDHE) assumption in the standard model. Finally, we provide an implementation of our schemes, and the simulation results indicate that they are highly efficient.
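The rule that separates CE from plain CP-ABE can be stated compactly: decryption requires both policy satisfaction and sufficient accumulated criterion weight. A toy sketch of that check follows; the attributes, weights, and threshold are invented, and the real schemes enforce this cryptographically via polynomials rather than a plaintext test:

```python
POLICY = {"doctor", "cardiology"}            # attributes required by the policy (assumed)
WEIGHTS = {"doctor": 3, "cardiology": 2, "intern": 1}   # criterion weights (assumed)
THRESHOLD = 5                                # minimum accumulated weight (assumed)

def can_decrypt(user_attrs):
    """CE access rule: the policy must be satisfied AND the summed weights
    of the user's attributes must reach the threshold."""
    satisfies_policy = POLICY <= user_attrs  # set containment = policy check
    weight = sum(WEIGHTS.get(a, 0) for a in user_attrs)
    return satisfies_policy and weight >= THRESHOLD

can_decrypt({"doctor", "cardiology"})   # policy met, weight 5 meets threshold
can_decrypt({"doctor", "intern"})       # policy unmet and weight only 4
```

In plain CP-ABE only the first condition exists; the weight threshold is what lets CE treat some criteria as more important than others.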


2021 ◽  
Vol 21 (4) ◽  
pp. 1-20
Author(s):  
Zhihan Lv ◽  
Liang Qiao ◽  
Sahil Verma ◽  
Kavita

As deep learning, virtual reality, and other technologies mature, real-time data processing applications running on intelligent terminals are emerging constantly; meanwhile, edge computing has developed rapidly and become a popular research direction in the field of distributed computing. An edge computing network is a network computing environment composed of multiple edge computing nodes and data centers. First, the edge computing framework and key technologies are analyzed to improve the performance of real-time data processing applications. In a system scenario that considers collaborative deployment across multiple edge nodes and data centers, the stream processing task deployment process is formally described, and an efficient multi-edge-node/computing-center collaborative task deployment algorithm is proposed, which solves the copy-free variant of the task deployment problem. Furthermore, a heterogeneous edge collaborative storage mechanism with tight coupling of computing and data is proposed, which resolves the tension between data processing demands and the limited computing and storage capabilities of intelligent terminals, thereby improving the performance of data processing applications. A Feasible Solution (FS) algorithm is designed to solve the problem of placing copy-free data processing tasks in the system; by taking overall coordination into account, it achieves excellent results. Under light load, the V value is reduced by 73% compared to the Only Data Center-available (ODC) algorithm and by 41% compared to the Hash algorithm. Under heavy load, the V value is reduced by 66% compared to the ODC algorithm and by 35% compared to the Hash algorithm.
By considering overall coordination and cooperation, the algorithm uses the bandwidth of edge nodes more effectively to transmit and process data streams, so that more tasks can be deployed on edge computing nodes, saving the time of transmitting data to the data centers. The end-to-end collaborative real-time data processing task scheduling mechanism proposed here effectively avoids long waiting times and failures to obtain required data, which significantly improves the success rate of tasks and thus ensures the performance of real-time data processing.
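The copy-free placement problem the FS algorithm targets (each task deployed exactly once, on an edge node if capacity allows, otherwise in the data center) can be illustrated with a simple greedy stand-in. All capacities and demands below are invented, and the actual FS algorithm is considerably more sophisticated:

```python
def place_tasks(tasks, edge_capacity, dc="DC"):
    """tasks: {name: demand}; edge_capacity: {node: capacity}.
    Deploys each task exactly once (copy-free), largest demand first."""
    remaining = dict(edge_capacity)
    placement = {}
    for name, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        # pick the edge node with the most spare capacity
        node = max(remaining, key=remaining.get)
        if remaining[node] >= demand:
            remaining[node] -= demand
            placement[name] = node
        else:
            placement[name] = dc   # the data center absorbs what the edge cannot
    return placement

plan = place_tasks({"t1": 4, "t2": 3, "t3": 5}, {"edge1": 6, "edge2": 5})
```

Here t3 and t1 fit on the two edge nodes, while t2 falls back to the data center once edge capacity is exhausted, mirroring the light-load versus heavy-load behaviour discussed above.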


Author(s):  
Jamuna S. Murthy

In recent years, edge/fog computing has gained importance and has led to the deployment of many smart devices and application frameworks that support real-time data processing. Edge computing is an extension of the existing cloud computing environment and focuses on improving the reliability, scalability, and resource efficiency of the cloud by removing the need to process all data in one place, thereby conserving network bandwidth. Edge computing can complement cloud computing, leading to a novel architecture that benefits from both edge and cloud resources. Such a resource architecture may require resource continuity, whereby the selection of resources for executing a service in the cloud is independent of physical location. Hence, this research work proposes a novel architecture called “EdgeCloud,” a distributed management system for resource continuity in edge-to-cloud computing environments. The performance of the system is evaluated using a traffic management service example mapped onto the proposed layered framework.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 779
Author(s):  
Shichao Chen ◽  
Qijie Li ◽  
Mengchu Zhou ◽  
Abdullah Abusorrah

In edge computing, edge devices can offload their overloaded computing tasks to an edge server. This exploits the edge server's advantages in computing and storage so that tasks execute efficiently. However, if all devices offload all of their overloaded computing tasks to one edge server, that server can itself become overloaded, resulting in high processing delays for many computing tasks and unexpectedly high energy consumption. Meanwhile, resources in idle edge devices may be wasted and resource-rich cloud centers may be underutilized. It is therefore essential to explore a collaborative scheduling mechanism for computing tasks spanning an edge server, a cloud center, and edge devices, according to task characteristics, optimization objectives, and system status. Such a mechanism can help realize efficient collaborative scheduling and precise execution of all computing tasks. This work analyzes and summarizes the scenarios in an edge computing paradigm. It then classifies the computing tasks in edge computing scenarios. Next, it formulates the optimization problem of computation offloading for an edge computing system. Based on this formulation, collaborative scheduling methods for computing tasks are reviewed. Finally, future research issues for advanced collaborative scheduling in the context of edge computing are indicated.
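The offloading objective such formulations minimize is typically a weighted sum of delay and energy over the candidate execution sites (device, edge, cloud). A compact sketch, with all cost numbers invented for illustration:

```python
COST = {                     # (delay_s, energy_J) per execution site, assumed
    "device": (1.0, 0.8),
    "edge":   (0.3, 0.2),
    "cloud":  (0.6, 0.1),
}

def task_cost(site, alpha=0.5):
    """Weighted overhead: alpha trades delay against energy."""
    delay, energy = COST[site]
    return alpha * delay + (1 - alpha) * energy

def best_site(alpha=0.5):
    """Pick the execution site minimizing the weighted overhead."""
    return min(COST, key=lambda s: task_cost(s, alpha))

best_site(alpha=0.5)   # edge wins when delay and energy matter equally
best_site(alpha=0.0)   # cloud wins when only energy matters
```

Varying `alpha` per task is one way a scheduler can reflect the task characteristics and optimization objectives mentioned above.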


Author(s):  
Klervie Toczé ◽  
Johan Lindqvist ◽  
Simin Nadjm-Tehrani

The edge computing paradigm comes with a promise of lower application latency compared to the cloud. Moreover, offloading user device computations to the edge enables running demanding applications on resource-constrained mobile end devices. However, there is a lack of workload models specific to edge offloading that use applications as their basis. In this work, we build upon the reconfigurable open-source mixed reality (MR) framework MR-Leo as a vehicle to study resource utilisation and quality of service for a time-critical mobile application that would have to rely on the edge to be widely deployed. We perform experiments to aid in estimating the resource footprint and the load generated by MR-Leo, and propose an application model and a statistical workload model for it. The idea is that such empirically driven models can serve as the basis for evaluations of edge algorithms in simulation or analytical studies. A comparison with a workload model used in a recent work shows that the computational demand of MR-Leo exhibits very different characteristics from those previously assumed for MR applications.
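A statistical workload model of the kind proposed here can be as simple as drawing per-frame processing demands from a distribution fitted to measurements. The sketch below uses a lognormal with invented parameters, not the values actually measured for MR-Leo:

```python
import random

random.seed(42)

def frame_demands(n_frames, mu=2.0, sigma=0.4):
    """Per-frame CPU demand (megacycles) drawn from a lognormal model.
    mu and sigma would be fitted to traces of the real application."""
    return [random.lognormvariate(mu, sigma) for _ in range(n_frames)]

demands = frame_demands(1000)
mean_demand = sum(demands) / len(demands)   # near exp(mu + sigma**2 / 2)
```

Feeding such synthetic traces into an edge-scheduling simulator lets one evaluate algorithms without rerunning the instrumented application.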

