A scheduling algorithm for autonomous driving tasks on mobile edge computing servers

2019 ◽  
Vol 94 ◽  
pp. 14-23 ◽  
Author(s):  
Hongjun Dai ◽  
Xiangyu Zeng ◽  
Zhilou Yu ◽  
Tingting Wang
Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1221 ◽  
Author(s):  
Liu ◽  
Chen ◽  
Wu ◽  
Deng ◽  
Liu ◽  
...  

With the rapid development of new types of services, autonomous driving has received extensive attention. Due to dense traffic flow and the limited battery life and computing power of vehicles, intelligent vehicles cannot support some computation-intensive and urgent tasks, while autonomous driving imposes strict requirements on task response time. Thanks to the strong computing power of mobile edge computing (MEC), its proximity to the terminal, and the arrival of 5G, tasks can be offloaded to MEC servers and data can be exchanged in milliseconds, which reduces task execution time. However, the resources of a MEC server are still very limited. We therefore propose a scheduling algorithm that accounts for the special characteristics of autonomous driving tasks: each task selects an appropriate edge cloud for execution, and the scheduling algorithm orders task execution on that edge cloud. We also take the mobility of high-speed vehicles into consideration: the vehicle's position is obtained by a prediction algorithm, and task results are returned to the vehicle by way of other edge clouds. The experimental results show that, as the task load increases, the algorithm can effectively schedule more tasks to complete within the specified time across different time slots, and it can also predict the location of the vehicle and return the result to it.
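The abstract does not give the algorithm's details, but its two steps (pick an edge cloud per task, then order each cloud's queue) can be sketched with made-up parameters: a greedy server choice by estimated completion time, followed by earliest-deadline-first ordering. All names, rates, and tasks below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: each task picks the edge cloud with the earliest
# estimated finish time; each edge cloud then runs its queue in
# earliest-deadline-first (EDF) order. Parameters are invented examples.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    cycles: float      # CPU cycles required
    deadline: float    # seconds from task creation

@dataclass
class EdgeCloud:
    name: str
    cpu_rate: float    # cycles per second
    rtt: float         # round-trip network delay, seconds
    queue: list = field(default_factory=list)

    def estimated_finish(self, task):
        # finish = network delay + (backlog + this task) / CPU rate
        backlog = sum(t.cycles for t in self.queue)
        return self.rtt + (backlog + task.cycles) / self.cpu_rate

def schedule(tasks, clouds):
    """Assign each task (most urgent first) to the cloud with the earliest
    estimated finish, then sort each cloud's queue by deadline (EDF)."""
    assignment = {}
    for task in sorted(tasks, key=lambda t: t.deadline):
        best = min(clouds, key=lambda c: c.estimated_finish(task))
        best.queue.append(task)
        assignment[task.name] = best.name
    for c in clouds:
        c.queue.sort(key=lambda t: t.deadline)
    return assignment

clouds = [EdgeCloud("mec-1", cpu_rate=2e9, rtt=0.002),
          EdgeCloud("mec-2", cpu_rate=1e9, rtt=0.001)]
tasks = [Task("lidar", 4e8, 0.05), Task("plan", 2e8, 0.10), Task("map", 6e8, 0.20)]
assignment = schedule(tasks, clouds)
```

In this toy run, the urgent "lidar" task takes the faster server, "plan" spills over to the less-loaded one, and queues stay deadline-ordered.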


2020 ◽  
Author(s):  
João Luiz Grave Gross ◽  
Cláudio Fernando Resin Geyer

In a scenario with increasingly mobile devices connected to the Internet, data-intensive applications, and energy consumption limited by battery capacity, we propose a cost minimization model for IoT devices in a Mobile Edge Computing (MEC) architecture, with the main objective of reducing total energy consumption and the total elapsed time from task creation to conclusion. The cost model is implemented in the TEMS (Time and Energy Minimization Scheduler) scheduling algorithm and validated by simulation. The results show that it is possible to reduce the energy consumed in the system by up to 51.61% and the total elapsed time by up to 86.65% in the simulated cases, with the parameters and characteristics defined in each experiment.
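A joint time-and-energy cost of this kind is commonly written as a weighted sum, and the offloading decision falls out of comparing local and edge costs. The sketch below is not the paper's TEMS formulation; the weights, rates, and the assumption that the device spends energy only on transmission while offloading are all illustrative.

```python
# Illustrative weighted time+energy cost model (not the actual TEMS
# algorithm): run the task locally or offload it, whichever is cheaper.
def cost(time_s, energy_j, w_time=0.5, w_energy=0.5):
    # Weighted sum of elapsed time and device energy; weights are assumptions.
    return w_time * time_s + w_energy * energy_j

def decide(task_cycles, data_bits, local, edge):
    # Local execution: compute time at the device's CPU rate, energy at its power draw.
    t_local = task_cycles / local["cpu_rate"]
    e_local = t_local * local["power"]
    # Offloading: upload time over the wireless link, then edge compute time.
    t_tx = data_bits / edge["bandwidth"]
    t_edge = t_tx + task_cycles / edge["cpu_rate"]
    e_edge = t_tx * edge["tx_power"]   # device energy spent only on transmission (assumption)
    c_local, c_edge = cost(t_local, e_local), cost(t_edge, e_edge)
    return ("offload" if c_edge < c_local else "local"), c_local, c_edge

choice, c_local, c_edge = decide(
    task_cycles=1e9, data_bits=4e6,
    local={"cpu_rate": 1e9, "power": 2.0},
    edge={"cpu_rate": 10e9, "bandwidth": 20e6, "tx_power": 0.5})
```

With these example numbers the edge is both faster and cheaper in energy, so the task is offloaded; shrinking `bandwidth` or growing `data_bits` tips the decision back to local execution.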


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Run Yang ◽  
Hui He ◽  
Weizhe Zhang

Mobile edge computing (MEC) pushes computing resources to the edge of the mobile network and distributes them there. Offloading computing tasks to the edge instead of the cloud can reduce computing latency and backhaul load simultaneously. However, user mobility and the limited coverage of each MEC server raise new challenges: services must be dynamically migrated between MEC servers to maintain performance as users move. Tackling this problem is nontrivial because user movement is hard to predict, and service migration generates service interruptions and redundant network traffic. Service interruption time must be minimized, and redundant network traffic should be reduced, to ensure service quality. In this paper, prediction-based container live migration is studied, and an online prediction method based on map data, which does not rely on prior knowledge such as user trajectories, is proposed to improve mobility prediction accuracy. A multitier framework and a scheduling algorithm are designed to select MEC servers according to users' moving speeds and the latency requirements of offloaded tasks, reducing redundant network traffic. Based on a map of Beijing, extensive experiments are conducted using simulation platforms and a real-world data trace. Experimental results show that our online prediction method outperforms the common strategy, and our system reduces network traffic by 65% while meeting task delay requirements. Moreover, it can flexibly respond to changes in the user's moving speed and environment to ensure the stability of the offloading service.
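The two ideas above, online movement prediction without prior trajectories and speed/latency-aware server selection, can be sketched minimally. The transition-counting predictor and the two-tier thresholds below are invented stand-ins for the paper's map-based method and multitier framework, not its actual design.

```python
# Toy versions of the two mechanisms (all thresholds are assumptions):
# 1) an online next-road-segment predictor that only counts observed
#    transitions, so it needs no prior user trajectories;
# 2) a tier selector that keeps latency-critical tasks on the nearest
#    edge tier and sends fast-moving users to a wider-coverage tier
#    to reduce migration-induced traffic.
from collections import Counter, defaultdict

class NextSegmentPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)  # seg_from -> Counter(seg_to)

    def observe(self, seg_from, seg_to):
        self.transitions[seg_from][seg_to] += 1

    def predict(self, seg):
        counts = self.transitions[seg]
        return counts.most_common(1)[0][0] if counts else None

def select_tier(speed_mps, latency_req_ms):
    if latency_req_ms < 20:          # strict deadline: nearest MEC server
        return "edge"
    if speed_mps > 15:               # fast user: wider coverage, fewer migrations
        return "regional"
    return "edge"

pred = NextSegmentPredictor()
for a, b in [("A", "B"), ("A", "B"), ("A", "C")]:
    pred.observe(a, b)
```

Having observed A→B twice and A→C once, the predictor forecasts B as the next segment from A; a fast user with a relaxed deadline is placed on the hypothetical "regional" tier.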


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Nanliang Shan ◽  
Yu Li ◽  
Xiaolong Cui

Mobile edge computing is a new computing paradigm that extends cloud computing capabilities to the edge network, supporting computation-intensive applications such as face recognition, natural language processing, and augmented reality. Notably, computation offloading is a key technology of mobile edge computing that improves mobile devices' performance and users' experience by offloading local tasks to edge servers. In this paper, the problem of computation offloading under multiuser, multiserver, and multichannel scenarios is researched, and a computation offloading framework is proposed that considers users' quality of service (QoS), server resources, and channel interference. The framework consists of three stages. (1) In the offloading decision stage, the offloading decision is made based on how beneficial computation offloading is, measured by comparing the total cost of local computing on the mobile device with that of the edge server. (2) In the edge server selection stage, candidate servers are comprehensively evaluated and selected for computation offloading by a multiobjective decision based on the Analytic Hierarchy Process based on Covariance (Cov-AHP). (3) In the channel selection stage, a multiuser, multichannel distributed computation offloading strategy based on a potential game is proposed, considering the influence of channel interference on each user's overall overhead. The corresponding multiuser multichannel task scheduling algorithm is designed to maximize the overall benefit by finding a Nash equilibrium of the potential game. Extensive experimental results show that the proposed framework can greatly increase the number of users who benefit from computation offloading and effectively reduce energy consumption and time delay.
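Stage (3) relies on a standard property of potential games: letting players take turns making best responses converges to a Nash equilibrium. The sketch below illustrates that dynamic for channel selection with a deliberately simplified cost, the number of users sharing a channel, as a stand-in for interference; it is not the paper's actual overhead model.

```python
# Best-response dynamics for a toy channel-selection congestion game
# (a simple potential game). A user's cost on a channel = how many users
# would share it -- a simplified proxy for channel interference.
from collections import Counter

def best_response_dynamics(num_users, num_channels, max_rounds=100):
    choice = [0] * num_users                 # everyone starts on channel 0
    for _ in range(max_rounds):
        changed = False
        load = Counter(choice)               # users per channel
        for u in range(num_users):
            def cost(ch):
                # load on ch after user u joins it (u already counted on its own channel)
                return load[ch] + (0 if choice[u] == ch else 1)
            best = min(range(num_channels), key=cost)
            if cost(best) < cost(choice[u]): # strict improvement -> switch
                load[choice[u]] -= 1
                load[best] += 1
                choice[u] = best
                changed = True
        if not changed:                      # no user can improve: Nash equilibrium
            return choice
    return choice

eq = best_response_dynamics(5, 2)
```

With 5 users and 2 channels, the dynamic settles into a balanced 3/2 split; in a congestion game like this, each unilateral switch strictly decreases a global potential function, which is why the loop terminates.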

