Queec: QoE-aware Edge Computing for IoT Devices under Dynamic Workloads

2021 ◽  
Vol 17 (3) ◽  
pp. 1-23
Author(s):  
Borui Li ◽  
Wei Dong ◽  
Gaoyang Guan ◽  
Jiadong Zhang ◽  
Tao Gu ◽  
...  

Many IoT applications require complex IoT event processing (e.g., speech recognition) that low-end IoT devices can hardly support due to their limited resources. Most existing approaches enable complex IoT event processing on low-end IoT devices by statically allocating tasks to the edge or the cloud. In this article, we present Queec, a QoE-aware edge computing system for complex IoT event processing under dynamic workloads. With Queec, complex IoT event processing tasks that are relatively computation-intensive for low-end IoT devices can be transparently offloaded to nearby edge nodes at runtime. We formulate the scheduling of multi-user tasks onto multiple edge nodes as an optimization problem that minimizes the overall offloading latency of all tasks while avoiding overloading any edge node. We implement Queec on low-end IoT devices, edge nodes, and the cloud. Extensive evaluations show that Queec reduces the offloading latency by 56.98% on average compared with the state of the art under dynamic workloads, while incurring acceptable overhead.
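
As a sketch of what such a formulation can look like (the notation below is ours, not the paper's: x_{ij} assigns task i to edge node j, the t terms are transmission and processing latencies, c_i is task i's resource demand, and C_j caps node j's load to prevent overloading):

    \min_{x} \sum_{i} \sum_{j} x_{ij} \left( t_{ij}^{\mathrm{tx}} + t_{ij}^{\mathrm{proc}} \right)
    \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
    \sum_{i} c_i \, x_{ij} \le C_j \;\; \forall j, \qquad x_{ij} \in \{0, 1\}.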

Author(s):  
Yong Xiao ◽  
Ling Wei ◽  
Junhao Feng ◽  
Wang En

Edge computing has emerged to meet the ever-increasing computation demands of delay-sensitive Internet of Things (IoT) applications. However, the computing capability of an edge device, whether a computing-enabled end user or an edge server, is insufficient to support the massive number of tasks generated by IoT applications. In this paper, we propose a two-tier end-edge collaborative computation offloading policy that supports as many computation-intensive tasks as possible while keeping the edge computing system strongly stable. We formulate the two-tier end-edge collaborative offloading problem with the objective of minimizing the task processing and offloading cost, subject to the stability of the queue lengths of end users and edge servers. We analyze the problem using Lyapunov drift-plus-penalty techniques. We then propose a cost-aware computation offloading (CACO) algorithm that finds optimal two-tier offloading decisions, minimizing the cost while keeping the edge computing system stable. Our simulation results show that the proposed CACO outperforms the benchmark algorithms, especially across varying numbers of end users and edge servers.
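
The drift-plus-penalty expression analyzed in such formulations typically has the following shape (a generic sketch, not the paper's exact notation: Q_i(t) are the queue backlogs, V weighs cost against queue stability, and cost(t) is the per-slot processing and offloading cost):

    L(\Theta(t)) = \tfrac{1}{2} \sum_{i} Q_i(t)^2, \qquad
    \Delta(\Theta(t)) = \mathbb{E}\{ L(\Theta(t+1)) - L(\Theta(t)) \mid \Theta(t) \},

and each slot one minimizes an upper bound on the drift-plus-penalty

    \Delta(\Theta(t)) + V \, \mathbb{E}\{ \mathrm{cost}(t) \mid \Theta(t) \},

which keeps the queues stable while driving the time-average cost toward its minimum.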


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4375 ◽  
Author(s):  
Yuxuan Wang ◽  
Jun Yang ◽  
Xiye Guo ◽  
Zhi Qu

As one of the information industry's future development directions, the Internet of Things (IoT) has been widely adopted. To reduce the pressure on the network caused by the long distance between the processing platform and the terminal, edge computing provides a new paradigm for IoT applications. In many scenarios, IoT devices are distributed in remote areas or extreme terrain, cannot be accessed directly through the terrestrial network, and can transmit data only via satellite. However, traditional satellites are highly customized, and their on-board resources are designed for specific applications rather than general-purpose computing. We therefore propose to transform the traditional satellite into a space edge computing node that can dynamically load software in orbit, flexibly share on-board resources, and provide services in coordination with the cloud. The corresponding hardware structure and software architecture of the satellite are presented. Modeling analysis and simulation experiments of the application scenarios show that the space edge computing system takes less time and consumes less energy than a traditional satellite constellation. The quality of service depends mainly on the number of satellites, satellite performance, and the task offloading strategy.
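
As a rough illustration of the trade-off such a model captures, the sketch below compares end-to-end time for on-board processing against downlinking raw data for ground processing (all rates, sizes, and ratios are illustrative assumptions, not values from the paper):

    # Illustrative comparison: process on a space edge node vs. downlink raw data.
    # All rates, sizes, and ratios are assumptions, not values from the paper.

    def edge_time_s(data_mb, onboard_mb_per_s=5.0, result_ratio=0.01, downlink_mbps=2.0):
        """Process on board, then downlink only the much smaller result."""
        return data_mb / onboard_mb_per_s + (data_mb * result_ratio * 8.0) / downlink_mbps

    def bent_pipe_time_s(data_mb, downlink_mbps=2.0, ground_mb_per_s=50.0):
        """Traditional satellite: downlink all raw data, then process on the ground."""
        return (data_mb * 8.0) / downlink_mbps + data_mb / ground_mb_per_s

    for mb in (10, 100, 1000):
        print(f"{mb:5d} MB: edge={edge_time_s(mb):8.1f} s, bent-pipe={bent_pipe_time_s(mb):8.1f} s")

Under these assumptions, on-board processing wins as soon as the raw data volume dwarfs the result volume, since the scarce downlink carries only the result.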


Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 3047
Author(s):  
Kolade Olorunnife ◽  
Kevin Lee ◽  
Jonathan Kua

Recent years have seen the rapid adoption of Internet of Things (IoT) technologies, in which billions of physical devices are interconnected to provide sensing, computing, and actuating capabilities. IoT-based systems have been extensively deployed across sectors such as smart homes, smart cities, smart transport, and smart logistics. Newer paradigms such as edge computing move computation and data intelligence closer to IoT devices, reducing latency for time-sensitive tasks. However, IoT applications are increasingly deployed in remote and difficult-to-reach areas for edge computing scenarios, which makes upgrading applications and dealing with software failures difficult. IoT applications are also increasingly deployed as containers, which offer better remote manageability but are more complex to configure. This paper proposes an approach for managing, updating, and re-configuring container-based IoT software as efficiently, scalably, and reliably as possible, with minimal downtime upon the detection of software failures. The approach is evaluated using Docker container-based IoT application deployments in an edge computing scenario.
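
A minimal sketch of the failure-detection-and-recovery loop such an approach implies, using the Docker SDK for Python (the label filter and polling interval are our assumptions; the paper's orchestration logic is more elaborate):

    import time
    import docker  # Docker SDK for Python: pip install docker

    client = docker.from_env()

    def recover_failed_containers(label="iot-app", poll_s=10):
        """Poll labeled containers; restart any that exited or report unhealthy."""
        while True:
            for c in client.containers.list(all=True, filters={"label": label}):
                state = c.attrs["State"]
                health = state.get("Health", {}).get("Status")
                if state["Status"] == "exited" or health == "unhealthy":
                    print(f"recovering {c.name} (status={state['Status']}, health={health})")
                    c.restart()  # reuse the existing container for minimal downtime
            time.sleep(poll_s)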


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 904 ◽  
Author(s):  
Adnan Sabovic ◽  
Carmen Delgado ◽  
Dragan Subotic ◽  
Bart Jooris ◽  
Eli De Poorter ◽  
...  

Billions of Internet of Things (IoT) devices rely on batteries as the main power source. These batteries are short-lived, bulky and harmful to the environment. Battery-less devices provide a promising alternative for a sustainable IoT, where energy harvested from the environment is stored in small capacitors. This constrained energy storage and the unpredictable energy harvested result in intermittent on–off behavior of the device. Measuring and understanding the current consumption and execution time of different tasks of IoT applications is crucial to properly operate these battery-less devices. In this paper, we study how to properly schedule sensing and transmission tasks on a battery-less LoRaWAN device. We analyze the trade-off between sleeping and allowing the device to turn off between the execution of application tasks. This study allows us to properly define the device configuration (i.e., capacitor size) based on the application tasks (i.e., sensing and sending) and environmental conditions (i.e., harvesting rate). We define an optimization problem that determines the optimal capacitor voltage at which the device should start performing its tasks. Our results show that a device using LoRaWAN Class A can measure the temperature and transmit its data at least once every 5 s if it can harvest at least 10 mA of current and uses a relatively small capacitor of 10 mF or less. At harvesting rates below 3 mA, it is necessary to turn off the device between application cycles and use a larger supercapacitor of at least 140 mF. In this case, the device can transmit a temperature measurement once every 60–100 s.
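
As a back-of-the-envelope check of these figures, the recharge time of the capacitor follows from t ≈ C·ΔV / I_harvest, where ΔV is the voltage swing between the turn-on and turn-off thresholds (the threshold values in the sketch below are our assumptions, not the paper's):

    # Back-of-the-envelope check of the capacitor sizing; the turn-on/turn-off
    # voltage thresholds are illustrative assumptions, not the paper's values.

    def recharge_time_s(cap_f, v_on, v_off, harvest_a):
        """Time to recharge the capacitor from V_off to V_on at a constant current."""
        return cap_f * (v_on - v_off) / harvest_a

    # Small capacitor, strong harvesting: 10 mF, 10 mA, 1.0 V swing -> ~1 s,
    # comfortably within the reported 5 s sensing/transmission period.
    print(recharge_time_s(0.010, 3.0, 2.0, 0.010))   # ~1.0 s

    # Large supercapacitor, weak harvesting: 140 mF, 3 mA, 1.5 V swing -> ~70 s,
    # consistent with the reported 60-100 s cycle.
    print(recharge_time_s(0.140, 3.3, 1.8, 0.003))   # ~70.0 s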


2019 ◽  
Vol 8 (3) ◽  
pp. 2356-2363

Nowadays, with the rapid development of Internet and cloud technologies, a large number of physical objects are linked to the Internet, and more objects are connected every day. This provides great benefits that significantly improve the quality of our daily life; examples include smart cities, smart homes, autonomous cars and airplanes, and health monitoring systems. Cloud computing, in turn, provides IoT systems with services such as data processing, storage, analysis, and security. It is estimated that by the year 2025, approximately one trillion IoT devices will be in use, generating a huge amount of data. In addition, to work efficiently and accurately, some IoT applications (such as self-driving and health monitoring) require quick responses. In this context, traditional cloud computing systems will have difficulty handling and providing services. To balance this scenario and overcome the drawbacks of cloud computing, a new computing model called fog computing has been proposed. In this paper, the fog computing and cloud computing paradigms are compared, considering the task scheduling of an IoT application in a cloud-fog computing system. The CloudAnalyst simulation toolkit was used for simulation and evaluation. The numerical results show that fog computing achieves better performance and works more efficiently than cloud computing, reducing the response time, processing time, and cost of transferring data to the cloud.
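
The intuition behind this result can be captured by a toy dispatcher that routes each task to whichever tier minimizes its estimated response time (the latency and processing-rate figures below are illustrative assumptions, not CloudAnalyst parameters):

    # Toy cloud-vs-fog dispatcher: route each task to the tier with the lower
    # estimated response time. All figures are illustrative assumptions.

    FOG   = {"rtt_ms": 5.0,   "mips": 2_000}   # nearby fog node
    CLOUD = {"rtt_ms": 100.0, "mips": 20_000}  # distant cloud data center

    def response_ms(tier, task_mi):
        """Round-trip network delay plus processing time for a task of task_mi MI."""
        return tier["rtt_ms"] + task_mi / tier["mips"] * 1000.0

    def dispatch(task_mi):
        return "fog" if response_ms(FOG, task_mi) <= response_ms(CLOUD, task_mi) else "cloud"

    for mi in (50, 500, 5_000):
        print(mi, dispatch(mi),
              f"fog={response_ms(FOG, mi):.1f} ms, cloud={response_ms(CLOUD, mi):.1f} ms")

Under these assumptions, small latency-sensitive tasks go to the fog while only heavy computations justify the round trip to the cloud, which is the effect the comparison measures.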


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 148
Author(s):  
Yassine Yazid ◽  
Imad Ez-Zazi ◽  
Antonio Guerrero-González ◽  
Ahmed El Oualkadi ◽  
Mounir Arioua

Unmanned aerial vehicles (UAVs) are becoming integrated into a wide range of modern IoT applications. The growing number of networked IoT devices generates a large amount of data, yet processing and storing this massive volume of data at local nodes is a critical challenge, especially when artificial intelligence (AI) systems are used to extract and exploit valuable information. In this context, mobile edge computing (MEC) has emerged as a way to bring cloud computing (CC) processes within reach of users and to address computation-intensive offloading and latency issues. This paper provides a comprehensive review of the most relevant research on UAV technology in UAV-enabled or UAV-assisted MEC architectures. It details the utility of UAV-enabled MEC architectures for emerging IoT applications and the role of both deep learning (DL) and machine learning (ML) in addressing limitations related to latency, task offloading, energy demand, and security. Throughout the article, the reader gains insight into the future of UAV-enabled MEC and into the advantages and critical challenges to be tackled when using AI.


Proceedings ◽  
2020 ◽  
Vol 54 (1) ◽  
pp. 24
Author(s):  
Iván Froiz-Míguez ◽  
Paula Fraga-Lamas ◽  
Tiago M. Fernández-Caramés

The recent increase in the number of connected IoT devices, as well as the heterogeneity of the environments where they are deployed, has led to growing complexity in Machine-to-Machine (M2M) communication protocols and technologies. In addition, the hardware used by IoT devices has become more powerful and efficient. These enhancements have made it possible to implement novel decentralized computing architectures such as those based on edge computing, which offload part of the central server's processing onto multiple distributed low-power nodes. To ease the deployment and synchronization of decentralized edge computing nodes, this paper describes an M2M distributed protocol based on Peer-to-Peer (P2P) communications that can be executed on low-power ARM devices, and proposes brokerless communications through a distributed publish/subscribe protocol. Because information is stored in a distributed way among the nodes of the swarm and each node can implement its own access control system, the proposed system can apply write-access mechanisms and encryption to the stored data so that the other nodes cannot access sensitive information. To test the feasibility of the proposed approach, it is compared with a Message Queuing Telemetry Transport (MQTT)-based architecture in terms of latency, network consumption, and performance.
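
The paper's protocol itself is not reproduced here, but the brokerless publish/subscribe pattern it builds on can be illustrated with ZeroMQ, where peers connect directly to one another instead of through a central broker as in MQTT (the endpoint addresses and topic are placeholders):

    import time
    import zmq  # pip install pyzmq

    def publisher(endpoint="tcp://*:5556"):
        sock = zmq.Context.instance().socket(zmq.PUB)
        sock.bind(endpoint)                      # each node serves its own data; no broker
        time.sleep(0.5)                          # let slow-joining subscribers attach
        sock.send_string("sensors/temp 21.5")    # "<topic> <payload>"

    def subscriber(peer="tcp://192.168.1.10:5556"):
        sock = zmq.Context.instance().socket(zmq.SUB)
        sock.connect(peer)                       # connect directly to the peer node
        sock.setsockopt_string(zmq.SUBSCRIBE, "sensors/temp")
        topic, value = sock.recv_string().split(" ", 1)
        print(topic, value)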


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6441 ◽  
Author(s):  
Salam Hamdan ◽  
Moussa Ayyash ◽  
Sufyan Almajali

The rapid growth of Internet of Things (IoT) applications and their integration into our daily tasks have led to a large number of IoT devices and enormous volumes of IoT-generated data. Because the resources of IoT devices are limited, processing and storing IoT data on these devices is inefficient. Traditional cloud computing resources partially handle some of the IoT resource-limitation issues; however, relying on cloud centers leads to other problems, such as latency in time-critical IoT applications. Edge-cloud computing technology has therefore recently evolved, allowing data to be processed and stored at the edge of the network. This paper studies edge-computing architectures for IoT (ECAs-IoT) in depth, classifies them according to factors such as data placement, orchestration services, security, and big data, and compares them across various features. Additionally, the ECAs-IoT are mapped onto two existing layered IoT models, which helps identify the capabilities, features, and gaps of each architecture. The paper then presents the most important limitations of existing ECAs-IoT and recommends solutions to them. Furthermore, this survey details the IoT applications in the edge-computing domain. Lastly, the paper recommends four scenarios for using ECAs-IoT in IoT applications.


Author(s):  
Xueqiang Yin ◽  
Athreya Tao Chen

Processing huge volumes of data is demanding and remains a challenging task in big data systems. The growing number of IoT devices in the network collects ever more data to be processed and stored in centralized cloud storage. To overcome the performance and latency issues of large-scale data computation, big data cloud processing systems incorporate edge computing, one of the key components of IoT. In this paper, we combine big data with cloud and edge computing into a hybrid edge computing system, in which a huge number of IoT devices consume services at their nearby network edge. Data sharing and transmission between the various service components can affect system performance, and the main aim of this research article is to reduce the delay in data transfers between these components. This optimization goal is achieved by a new Hybrid Meta-heuristic Optimization (HMeO) algorithm, designed to deploy service components for IoT devices by selecting edge nodes with minimum latency. The proposed HMeO algorithm is compared with existing genetic and ant colony algorithms. The results show that HMeO performs better and is more efficient at in-depth data analysis and component placement in a big-data-based cloud environment.
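
The abstract does not spell out HMeO's internals, so the sketch below shows only the general shape of such a metaheuristic: a random-restart local search that relocates service components between edge nodes to reduce total transfer delay (the traffic and delay matrices and the neighborhood move are our assumptions, not the authors' algorithm):

    import random

    # Illustrative sketch only (not the authors' HMeO): random-restart local
    # search placing service components on edge nodes to cut transfer delay.

    def total_delay(placement, traffic, delay):
        """Sum of traffic[i][j] * delay between the nodes hosting components i and j."""
        n = len(placement)
        return sum(traffic[i][j] * delay[placement[i]][placement[j]]
                   for i in range(n) for j in range(n))

    def place_components(n_components, n_nodes, traffic, delay, restarts=20, steps=500):
        best, best_cost = None, float("inf")
        for _ in range(restarts):
            p = [random.randrange(n_nodes) for _ in range(n_components)]
            cost = total_delay(p, traffic, delay)
            for _ in range(steps):
                i = random.randrange(n_components)
                old = p[i]
                p[i] = random.randrange(n_nodes)        # try relocating one component
                new_cost = total_delay(p, traffic, delay)
                if new_cost < cost:
                    cost = new_cost                     # keep improving moves
                else:
                    p[i] = old                          # revert the rest
            if cost < best_cost:
                best, best_cost = p[:], cost
        return best, best_cost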

