Study QoS Optimization and Energy Saving Techniques in Cloud, Fog, Edge, and IoT

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-16 ◽  
Author(s):  
Zhiguo Qu ◽  
Yilin Wang ◽  
Le Sun ◽  
Dandan Peng ◽  
Zheng Li

As service users' demands for high quality of service (QoS) increase, more and more efficient service computing models are being proposed. The development of cloud computing, fog computing, and edge computing brings a number of challenges, e.g., QoS optimization and energy saving. We present a comprehensive survey on QoS optimization and energy saving in cloud computing, fog computing, edge computing, and IoT environments. We summarize the main challenges and analyze the corresponding solutions proposed by existing works. This survey aims to help readers gain a deeper understanding of the concepts of the different computing models and study the techniques of QoS optimization and energy saving in these models.

2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Kai Peng ◽  
Victor C. M. Leung ◽  
Xiaolong Xu ◽  
Lixin Zheng ◽  
Jiabin Wang ◽  
...  

Mobile cloud computing (MCC) integrates cloud computing (CC) into mobile networks, prolonging the battery life of mobile users (MUs). However, this mode may cause significant execution delay. To address the delay issue, a new mode known as mobile edge computing (MEC) has been proposed. MEC provides computing and storage services at the edge of the network, enabling MUs to execute applications efficiently and meet delay requirements. In this paper, we present a comprehensive survey of MEC research from the perspective of service adoption and provision. We first give an overview of MEC, including its definition, architecture, and services. After that, we review the existing MU-oriented service adoption of MEC, i.e., offloading. More specifically, the study of offloading is divided into two key taxonomies: computation offloading and data offloading. Each of them is further divided into single-MU and multi-MU offloading schemes. Then we survey edge server- (ES-) oriented service provision, including technical indicators, ES placement, and resource allocation. In addition, other issues, such as applications of MEC and open issues, are investigated. Finally, we conclude the paper.
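The single-MU computation-offloading decision described above can be sketched with a minimal delay model; the parameter names and the offload-if-faster rule below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of a single-MU binary computation-offloading decision.
# The task runs locally, or its input is sent to an edge server and run there.

def should_offload(cycles, data_bits, f_local, f_server, rate):
    """Return True if offloading finishes sooner than local execution.

    cycles    -- CPU cycles the task needs
    data_bits -- input size to transmit to the edge server (bits)
    f_local   -- local CPU frequency (cycles/s)
    f_server  -- edge-server CPU frequency (cycles/s)
    rate      -- uplink transmission rate (bits/s)
    """
    t_local = cycles / f_local                    # pure local execution time
    t_offload = data_bits / rate + cycles / f_server  # transmit + remote execute
    return t_offload < t_local
```

Real MEC offloading schemes typically weigh energy as well as delay and consider multi-MU interference; this only captures the basic delay trade-off.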


Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2783 ◽  
Author(s):  
Kun Ma ◽  
Antoine Bagula ◽  
Clement Nyirenda ◽  
Olasupo Ajayi

The internet of things (IoT) and cloud computing are two technologies which have recently changed both academia and industry and impacted our daily lives in different ways. However, despite their impact, both technologies have their shortcomings. Though cheap and convenient, cloud services consume a huge amount of network bandwidth. Furthermore, the physical distance between data source(s) and the data centre makes delay a frequent problem in cloud computing infrastructures. Fog computing has been proposed as a distributed service computing model that addresses these limitations. It is based on a para-virtualized architecture that fully utilizes the computing functions of terminal devices and the advantages of local proximity processing. This paper proposes a multi-layer IoT-based fog computing model called IoT-FCM, which uses a genetic algorithm for resource allocation between the terminal layer and the fog layer, and a multi-sink version of the least interference beaconing protocol (LIBP) called the least interference multi-sink protocol (LIMP) to enhance the fault tolerance/robustness and reduce the energy consumption of the terminal layer. Simulation results show that, compared to the popular max–min and fog-oriented max–min, IoT-FCM performs better, reducing the distance between terminals and fog nodes by at least 38% and reducing energy consumed by an average of 150 kWh, while remaining on par with the other algorithms in terms of delay for a high number of tasks.
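A genetic algorithm for terminal-to-fog resource allocation, as used in IoT-FCM, can be sketched as follows. The coordinates, fitness function (total Euclidean distance), and GA parameters here are invented for illustration and do not reproduce the paper's exact encoding:

```python
import random

def total_distance(assign, terminals, fogs):
    """Fitness: sum of Euclidean distances from each terminal to its fog node."""
    return sum(((terminals[i][0] - fogs[a][0]) ** 2 +
                (terminals[i][1] - fogs[a][1]) ** 2) ** 0.5
               for i, a in enumerate(assign))

def ga_assign(terminals, fogs, pop_size=30, gens=100, seed=0):
    """Evolve an assignment (terminal index -> fog index) minimizing distance."""
    rng = random.Random(seed)
    n, m = len(terminals), len(fogs)
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: total_distance(a, terminals, fogs))
        survivors = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                # point mutation
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda a: total_distance(a, terminals, fogs))
```

A real deployment would fold energy and load terms into the fitness function rather than distance alone.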


Author(s):  
Ahmed El-Yahyaoui ◽  
Mohamed Daifr Ech-Cherif El Kettani

Fully homomorphic encryption (FHE) schemes are a type of encryption algorithm dedicated to data security in cloud computing. They allow computations to be performed over ciphertexts. In addition to this characteristic, a verifiable FHE scheme gives an end user the capacity to verify the correctness of the computations done by a cloud server on his encrypted data. Since FHE schemes are known to be greedy in terms of processing consumption and slow in terms of runtime execution, it is very useful to look for techniques and tools that improve FHE performance. Parallelizing computations is among the best tools one can use for FHE improvement. Batching is a form of computational parallelization: when applied to an FHE scheme, it gives the scheme the capacity to encrypt and homomorphically process a vector of plaintexts as a single ciphertext. In the context of cloud computing, this is used to perform a known function on several ciphertexts for multiple clients at the same time. The advantage is in optimizing resources on the cloud side and improving the quality of services provided by the cloud. In this article, the authors present a detailed survey of the different FHE improvement techniques in the literature and apply the batching technique to a promising verifiable FHE (VFHE) scheme recently presented by the authors at the WINCOM17 conference.
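The slot-wise semantics of batching can be illustrated schematically. The mock below performs no encryption at all; it only shows how one batched "ciphertext" carries a vector of plaintext slots and how homomorphic operations act on every slot in a single pass, which is the behaviour a real SIMD-batched FHE scheme provides:

```python
# Schematic illustration of batching semantics only -- no real encryption.
class BatchedCiphertext:
    def __init__(self, slots):
        self.slots = list(slots)

    def __add__(self, other):   # homomorphic addition, slot-wise
        return BatchedCiphertext(a + b for a, b in zip(self.slots, other.slots))

    def __mul__(self, other):   # homomorphic multiplication, slot-wise
        return BatchedCiphertext(a * b for a, b in zip(self.slots, other.slots))

# One batched ciphertext carries inputs for several clients at once,
# so the server evaluates f(x) = x*x + x on all of them in one pass.
ct = BatchedCiphertext([2, 3, 5])
result = ct * ct + ct
print(result.slots)   # [6, 12, 30]
```

In a genuine batched scheme (e.g., via CRT packing), the slots live inside one ciphertext, so the server's cost is amortized over all clients sharing it.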


The introduction of cloud computing has revolutionized business and technology, merging the two into an almost indistinguishable framework. Cloud computing has utilized various techniques that have been vital in reshaping the way computers are used in business, IT, and education. It has replaced the distributed model of computing resources with a centralized one, in which resources are easily shared between users and organizations located in different geographical locations. Traditionally, the resources are stored and managed by a third party, but the process is usually transparent to the user. The new technology led to various user needs, such as searching the cloud and associated databases, and to the development of selection systems for doing so, such as ELECTRE IS and Skyline. This research will develop a system to manage and determine the quality-of-service constraints of these new systems with regard to networked cloud computing. The method applied will mimic the various selection systems in Java and evaluate the quality of service of multiple cloud services. The FogTorch search tool will be used for quality-of-service management of three cloud services.
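Evaluating the quality of service of multiple cloud services, as described above, often reduces to scoring each service over weighted QoS attributes and ranking them. The attribute names, weights, and the simple weighted-sum rule below are invented for illustration and are not the selection logic of ELECTRE IS, Skyline, or FogTorch:

```python
# Hedged sketch: rank cloud services by a weighted QoS score.
def qos_score(service, weights):
    # Higher availability/throughput is better; lower latency/cost is better,
    # so the "lower-is-better" attributes contribute negatively.
    return (weights["availability"] * service["availability"]
            + weights["throughput"] * service["throughput"]
            - weights["latency"] * service["latency"]
            - weights["cost"] * service["cost"])

services = {
    "service_a": {"availability": 0.99, "throughput": 120, "latency": 40, "cost": 5},
    "service_b": {"availability": 0.95, "throughput": 200, "latency": 90, "cost": 3},
}
weights = {"availability": 100, "throughput": 1, "latency": 1, "cost": 2}
best = max(services, key=lambda s: qos_score(services[s], weights))
```

Outranking methods such as ELECTRE IS replace the single score with pairwise concordance/discordance tests, but the attribute-and-weight inputs are of the same shape.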


A huge number of nodes are connected in cloud computing to offer various types of web services to cloud clients. A limited number of nodes may have to execute more than a thousand or a million tasks at the same time, so it is not simple to execute all tasks at once; when some nodes execute all tasks, there is a need to balance the tasks, or loads, across nodes. Load balancing minimizes the completion time and executes all the tasks in an orderly way. It is not possible to keep an equal number of servers in cloud computing to execute an equal number of tasks: the tasks to be performed typically outnumber the connected servers, and a limited number of servers have to perform a great number of tasks. We propose a task scheduling algorithm in which a few nodes perform jobs that outnumber the nodes, balancing all loads across the available nodes to make the best use of the quality of services through load balancing.
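A common baseline for the "more tasks than nodes" setting above is greedy least-loaded scheduling: each task goes to the node that currently finishes earliest. The sketch below uses this baseline with a longest-task-first ordering; it is illustrative and not necessarily the exact algorithm the abstract proposes:

```python
import heapq

def balance(task_costs, n_nodes):
    """Assign tasks (given as execution costs) to n_nodes, greedily
    placing each task on the currently least-loaded node."""
    heap = [(0.0, node) for node in range(n_nodes)]   # (current load, node id)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(n_nodes)}
    # Longest-processing-time-first ordering improves the greedy rule's balance.
    for task in sorted(range(len(task_costs)), key=lambda t: -task_costs[t]):
        load, node = heapq.heappop(heap)
        assignment[node].append(task)
        heapq.heappush(heap, (load + task_costs[task], node))
    makespan = max(load for load, _ in heap)          # completion time
    return assignment, makespan
```

This greedy rule is a 4/3-approximation to the optimal makespan, which is why it is a standard yardstick for cloud load-balancing proposals.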


Author(s):  
Osvaldo Adilson De Carvalho Junior ◽  
Sarita Mazzini Bruschi ◽  
Regina Helena Carlucci Santana ◽  
Marcos José Santana

The aim of this paper is to propose and evaluate GreenMACC (Green Metascheduler Architecture to Provide QoS in Cloud Computing), an extension of the MACC architecture (Metascheduler Architecture to provide QoS in Cloud Computing) which uses green IT techniques to provide quality of service. The paper evaluates the performance of the policies in the four stages of scheduling, focusing on energy consumption and average response time. The results presented confirm the consistency of the proposal, as it controls both energy consumption and the quality of services requested by different users of a large-scale private cloud.


2019 ◽  
Vol 2 (2) ◽  
pp. 13-43
Author(s):  
Ashish Tiwari ◽  
Rajeev Mohan Sharma

Fog computing provides resources as a service. Various providers offer the best form of quality of service (QoS), working on the principle of pay-per-use. It is now important to connect Internet of Things (IoT) services to fog computing. The strategy for choosing a service provider is assessed according to what each cloud provider offers.


Author(s):  
V. Goswami ◽  
S. S. Patra ◽  
G. B. Mund

In cloud computing, the virtualization of IT infrastructure enables consolidation and pooling of IT resources so they are shared over diverse applications to offset the limitation of shrinking resources and growing business needs. Cloud computing is a way to increase capacity or add capabilities dynamically without investing in new infrastructure, training new personnel, or licensing new software. It extends information technology's existing capabilities. In the last few years, cloud computing has grown from a promising business concept to one of the fastest-growing segments of the IT industry. For the commercial success of this new computing paradigm, the ability to deliver guaranteed quality of service is crucial. Based on the service level agreement, requests are processed in the cloud centers in different modes. This chapter deals with quality of service and optimal management of cloud centers with different arrival modes. For this purpose, the authors consider a finite-buffer multi-server queuing system where client requests have different arrival modes. It is assumed that each arrival mode is serviced by one or more virtual machines, and different modes have equal probabilities of receiving services. Various performance measures are obtained, and an optimal cost policy is presented with numerical results. A genetic algorithm is employed to search for optimal values of the system's parameters.
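The finite-buffer multi-server setting above is classically modeled as an M/M/c/K queue. As a simplified sketch (a single arrival mode, exponential service, one virtual machine per server; the chapter's multi-mode model is richer), the standard steady-state measures can be computed directly:

```python
from math import factorial

def mmck_measures(lam, mu, c, K):
    """Blocking probability, mean number in system, and mean response time
    for an M/M/c/K queue (arrival rate lam, service rate mu per server,
    c servers, at most K requests in the system)."""
    a = lam / mu
    # Unnormalized state probabilities p_n for n = 0..K.
    probs = [a ** n / factorial(n) if n <= c
             else a ** n / (factorial(c) * c ** (n - c))
             for n in range(K + 1)]
    norm = sum(probs)
    probs = [p / norm for p in probs]
    p_block = probs[K]                           # arriving request finds buffer full
    mean_in_system = sum(n * p for n, p in enumerate(probs))
    throughput = lam * (1 - p_block)             # effective (accepted) arrival rate
    mean_response = mean_in_system / throughput  # Little's law
    return p_block, mean_in_system, mean_response
```

Measures like these are exactly what a cost policy (or the genetic-algorithm parameter search the chapter mentions) would optimize over `c` and `K`.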


2021 ◽  
Author(s):  
Ethar H. K. Alkamil ◽  
Ammar A. Mutlag ◽  
Haider W. Alsaffar ◽  
Mustafa H. Sabah

Abstract Recently, the oil and gas industry has faced several crucial challenges affecting the global energy market, including the Covid-19 outbreak, fluctuations in oil prices with considerable uncertainty, dramatically increased environmental regulations, and digital cybersecurity challenges. The industrial internet of things (IIoT) may therefore provide the hybrid cloud and fog computing needed to analyze huge amounts of sensitive data from sensors and actuators, monitor oil rigs and wells closely, and thereby better control global oil production. Improved quality of service (QoS) is possible with fog computing: an extended cloud located near the underlying nodes, it can alleviate challenges that a standard isolated cloud cannot handle. The cloud computing paradigm alone is not sufficient to meet the needs of already extensively utilized IIoT (i.e., edge) applications, such as health care and sensor networks, which require low latency and jitter, context awareness, and mobility support. Several paradigms, such as mobile edge computing, fog computing, and mobile cloud computing, have arisen recently to meet these criteria. Fog computing helps optimize services and create better user experiences, such as faster responses for critical, time-sensitive needs. At the same time, it also invites problems, such as overload, underload, and disparity in resource usage, affecting latency, response times, throughput, etc. The comprehensive review presented in this work shows that fog devices have highly constrained environments and limited hardware capabilities. The existing cloud computing infrastructure is not capable of processing all data in a centralized manner because of network bandwidth costs and response latency requirements.
Therefore, fog computing, referred to as "the enabling technologies allowing computation to be performed at the edge of the network, on downstream data on behalf of cloud services and upstream data on behalf of IIoT services" (Shi et al., 2016), is more effective than centralized processing when data sources are close together. A review of the fog and cloud computing literature suggests that fog computing outperforms cloud computing for time-dependent computations. Because the cloud is accessed over the internet, it is inefficient for latency-sensitive multimedia services and other time-sensitive applications, such as the real-time monitoring, automation, and optimization of petroleum industry operations. As a result, a growing number of IIoT projects are dispersing fog computing capacity throughout the edge network as well as through data centers and the public cloud. A comprehensive review of fog computing features is presented here, along with the potential of using fog computing in the petroleum industry. Fog computing can provide rapid responses for applications by preprocessing and filtering data; the trimmed data can then be transmitted to the cloud for additional analysis and better service delivery.
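The fog-side preprocess-filter-forward pattern described above can be sketched minimally. The reading thresholds and the summary fields below are invented for illustration; a rig deployment would use sensor-specific limits:

```python
# Sketch of fog-side preprocessing: drop sensor glitches locally, raise an
# immediate local alert for time-critical values, and forward only a trimmed
# summary to the cloud for deeper analysis.

def fog_preprocess(readings, low=0.0, high=150.0, alarm=120.0):
    valid = [r for r in readings if low <= r <= high]   # discard glitches
    alerts = [r for r in valid if r >= alarm]           # local fast path
    summary = {                                         # trimmed cloud payload
        "count": len(valid),
        "mean": sum(valid) / len(valid) if valid else None,
        "max": max(valid) if valid else None,
    }
    return alerts, summary
```

The latency-critical branch (`alerts`) never leaves the fog node, while the cloud receives a payload whose size is independent of the raw sampling rate.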


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Xiaoying Wang ◽  
Xiaojing Liu ◽  
Lihua Fan ◽  
Xuhan Jia

As cloud computing offers services to many users worldwide, pervasive applications from customers are hosted by large-scale data centers. On such platforms, virtualization technology is employed to multiplex the underlying physical resources. Since the incoming loads of different applications vary significantly, it is important and critical to manage the placement and resource allocation schemes of the virtual machines (VMs) in order to guarantee the quality of services. In this paper, we propose a decentralized virtual machine migration approach inside data centers for cloud computing environments. The system models and power models are defined and described first. Then, we present the key steps of the decentralized mechanism, including the establishment of load vectors, load information collection, VM selection, and destination determination. A two-threshold decentralized migration algorithm is implemented to further save energy consumption while maintaining the quality of services. Through performance evaluation experiments, the thresholds and other factors of our approach are analyzed and discussed. The results illustrate that the proposed approach can efficiently balance the loads across different physical nodes and also leads to less power consumption of the entire system holistically.
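The two-threshold idea can be sketched as follows: nodes above an upper utilization threshold shed a VM toward nodes below a lower threshold. The threshold values and the hottest-node/coolest-node/cheapest-VM selection rules below are illustrative assumptions, not the paper's exact policy:

```python
# Two-threshold VM migration sketch.
def pick_migration(loads, vms, upper=0.8, lower=0.4):
    """loads: node -> CPU utilization in [0, 1].
    vms:   node -> list of (vm_name, vm_load) pairs.
    Returns (vm, source, destination) or None if no migration is needed."""
    overloaded = [n for n, u in loads.items() if u > upper]
    candidates = [n for n, u in loads.items() if u < lower]
    if not overloaded or not candidates:
        return None                               # both thresholds must trigger
    src = max(overloaded, key=loads.get)          # hottest node sheds load
    dst = min(candidates, key=loads.get)          # coolest node absorbs it
    vm, _ = min(vms[src], key=lambda x: x[1])     # cheapest VM to move
    return vm, src, dst
```

Keeping the two thresholds apart creates a hysteresis band that prevents VMs from oscillating between nodes, which is the usual motivation for a two-threshold design over a single cutoff.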

