Optimal Service Provisioning for the Scalable Fog/Edge Computing Environment

Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1506 ◽  
Author(s):  
Jonghwa Choi ◽  
Sanghyun Ahn

In recent years, we have observed the proliferation of cloud data centers (CDCs) and the Internet of Things (IoT). Cloud computing based on CDCs has the drawback of unpredictable response times due to varying delays between service requestors (IoT devices and end devices) and CDCs. This deficiency of cloud computing is especially problematic for IoT services with strict timing requirements and, as a result, gave birth to fog/edge computing (FEC), whose responsiveness is achieved by placing service images near service requestors. In FEC, the computing nodes located close to service requestors are called fog/edge nodes (FENs). For an FEN to execute a specific service, it has to be provisioned with the corresponding service image. Most previous work on service provisioning in the FEC environment deals with determining an appropriate FEN that satisfies requirements such as delay, CPU, and storage from the perspective of one or more service requests. In this paper, we determine how to optimally place service images based on pre-obtained service demands, which may be collected during the prior time interval. The proposed FEC environment is scalable in the sense that the resources of FENs are effectively utilized thanks to the optimal provisioning of services on FENs. We propose two approaches to provisioning service images on FENs. To validate the performance of the proposed mechanisms, extensive simulations were carried out for various service demand scenarios.
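
As an illustration of the provisioning problem (the abstract does not detail the two proposed approaches), the sketch below greedily places the most-demanded service images on the FENs with the most free storage; all names, sizes, and capacities are hypothetical.

```python
# Illustrative sketch only, not the paper's method: place service images
# on fog/edge nodes (FENs) to cover as much pre-obtained demand as
# possible under per-node storage limits, most-demanded services first.

from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    image_size: int      # storage units needed by the service image
    demand: int          # requests observed during the prior interval

@dataclass
class FEN:
    name: str
    storage: int                          # remaining storage capacity
    images: list = field(default_factory=list)

def greedy_provision(services, fens):
    """Place the most-demanded services first on the FEN with most free storage."""
    for svc in sorted(services, key=lambda s: s.demand, reverse=True):
        candidates = [f for f in fens if f.storage >= svc.image_size]
        if not candidates:
            continue                      # demand for this service stays unserved
        target = max(candidates, key=lambda f: f.storage)
        target.images.append(svc.name)
        target.storage -= svc.image_size

services = [Service("video-analytics", 40, 900),
            Service("sensor-aggregation", 10, 500),
            Service("firmware-cache", 25, 120)]
fens = [FEN("fen-1", 50), FEN("fen-2", 30)]
greedy_provision(services, fens)
for f in fens:
    print(f.name, f.images, "free:", f.storage)
```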

2014 ◽  
Vol 1008-1009 ◽  
pp. 1513-1516
Author(s):  
Hai Na Song ◽  
Xiao Qing Zhang ◽  
Zhong Tang He

The cloud computing environment is regarded as a multi-tenant computing mode. With virtualization as a supporting technology, cloud computing consolidates multiple workloads on one server through the packaging and separation of virtual machines. Addressing the contradiction between heterogeneous applications and a uniform shared resource pool, this paper analyzes the multidimensional resource scheduling problem using the idea of bin packing. We carry out example analyses of one-dimensional, two-dimensional, and three-dimensional resource scheduling. The results show that the resource utilization of cloud data centers improves greatly when scheduling is conducted after the heterogeneous demands are rationally reorganized.
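
A minimal sketch of this bin-packing view, assuming each demand is a (CPU, memory, disk) vector and each server a unit-capacity bin. First-fit decreasing is one common heuristic; it is not necessarily the paper's reorganization strategy.

```python
# Vector bin packing: fit demand vectors into as few unit-capacity
# servers as possible, placing the largest demands first.

def first_fit_decreasing(demands, dims=3):
    servers = []  # each server = list of remaining capacity per dimension
    # sort by the largest resource requirement, biggest first
    for d in sorted(demands, key=max, reverse=True):
        for server in servers:
            if all(server[i] >= d[i] for i in range(dims)):
                for i in range(dims):
                    server[i] -= d[i]
                break
        else:
            # no existing server fits this demand; open a new one
            servers.append([1.0 - d[i] for i in range(dims)])
    return len(servers)

# (CPU, memory, disk) fractions of one server
workloads = [(0.6, 0.2, 0.1), (0.3, 0.7, 0.2), (0.4, 0.3, 0.8), (0.2, 0.2, 0.1)]
print("servers used:", first_fit_decreasing(workloads))
```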


2020 ◽  
Vol 8 (6) ◽  
pp. 4590-4596

This work monitors a high-throughput distributed system using statistical analysis of the historical time series of instrumentation data. A data pipeline is a set of data-processing elements connected in series, where the output of one element is the input of the next. Several scripts provide different visualizations for the statistical analysis of the data. Network and cloud data centers generate a great deal of data every second; this data can be gathered as time-series data. A time series is a sequence of observations taken at successive, equally spaced points in time; that is, the values of specific data recorded at a particular interval up to a particular time constitute time-series data. This time-series data can be gathered using system metrics such as CPU, memory, and disk utilization. The TICK and ELK stacks are platforms of open-source tools built to make the collection, storage, graphing, and alerting on time-series data incredibly easy. Telegraf is used as the data collector, while the time-series database InfluxDB and Elasticsearch are used for storing and analyzing the data. Grafana and Kibana are used for plotting and visualization. Watchman is used for alert refinement: once system metric usage exceeds the specified threshold, an alert is generated and sent to Telegram.
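
A hedged sketch of the collection stage, assuming a local InfluxDB 1.x instance and the `influxdb` and `psutil` Python packages; Telegraf normally performs this step, so this only illustrates the underlying data model.

```python
# Sample CPU, memory, and disk utilization and write them to InfluxDB
# as time-series points, one sample every 10 seconds.

import time
import psutil
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="metrics")
client.create_database("metrics")   # idempotent if it already exists

while True:
    point = {
        "measurement": "system",
        "fields": {
            "cpu_percent": psutil.cpu_percent(interval=None),
            "mem_percent": psutil.virtual_memory().percent,
            "disk_percent": psutil.disk_usage("/").percent,
        },
    }
    client.write_points([point])
    time.sleep(10)
```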


2019 ◽  
pp. 446-458
Author(s):  
Arun Fera M. ◽  
M. Saravanapriya ◽  
J. John Shiny

Cloud computing is one of the most vital technologies and has become part and parcel of corporate life. It is considered one of the most significant emerging technologies and serves various applications. Generally, cloud computing systems provide various data storage services that greatly reduce complexity for users. We mainly focus on providing confidentiality for users' data, and we propose a mechanism to address this issue. Since software-level security has vulnerabilities that prevent it from solving this problem, we deal with providing hardware-level security. We focus on the Trusted Platform Module (TPM), a chip in a computer that provides secure storage and is mainly used to deal with the authentication problem. A TPM, when used, provides a trustworthy environment to users. A detailed survey of various existing TPM-related security mechanisms and their implementations is carried out in our research work.
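
A conceptual sketch of the seal/unseal pattern that TPM-backed storage enables; the `HypotheticalTPM` class is a mock stand-in, not a real TPM API (real code would use a binding such as tpm2-pytss or the tpm2-tools CLI).

```python
# Mock illustration: data sealed to a TPM can only be unsealed on the
# same device while its measured platform state (PCR values) is intact.

class HypotheticalTPM:
    def __init__(self, pcr_state):
        self._pcr_state = pcr_state       # current measured boot state
    def seal(self, plaintext, pcr_state):
        # real TPMs encrypt with a hardware-protected key; this is a mock
        return {"blob": plaintext[::-1], "pcrs": pcr_state}
    def unseal(self, sealed):
        if sealed["pcrs"] != self._pcr_state:
            raise PermissionError("platform state changed; refusing to unseal")
        return sealed["blob"][::-1]

tpm = HypotheticalTPM(pcr_state="trusted-boot-hash")
blob = tpm.seal(b"cloud storage encryption key", "trusted-boot-hash")
print(tpm.unseal(blob))   # succeeds only on the unmodified platform
```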


2019 ◽  
Vol 9 (17) ◽  
pp. 3550 ◽  
Author(s):  
A-Young Son ◽  
Eui-Nam Huh

With the rapid growth of cloud data centers, massive amounts of data are expected to be generated, which lengthens service response times for the cloud data centers. To improve service response time, distributed cloud computing has been designed and researched, placing and migrating services from mobile devices to nearby edge servers with secure computing resources. However, most related studies did not provide sufficient service efficiency with respect to multi-objective factors such as energy efficiency, resource efficiency, and performance improvement. In addition, most existing approaches did not consider a variety of metrics. Thus, to maximize energy efficiency, maximize performance, and reduce costs, we consider multi-metric factors by combining decision methods according to user requirements. In order to satisfy service-based user requirements, we propose an efficient service placement system named fuzzy-analytic hierarchy process (fuzzy-AHP) and then analyze the metrics that enable the decision and selection of a machine in the distributed cloud environment. Lastly, using different placement schemes, we demonstrate the performance of the proposed scheme.
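
A minimal sketch of the crisp AHP step underlying fuzzy-AHP, assuming numpy; the comparison values, criteria, and candidate machines are illustrative, and the paper's fuzzification step is omitted.

```python
# Derive criteria weights from a pairwise comparison matrix via its
# principal eigenvector, then rank candidate machines by weighted score.

import numpy as np

# how much more important each row criterion is than each column criterion
comparisons = np.array([[1.0, 3.0, 0.5],    # energy efficiency
                        [1/3, 1.0, 0.25],   # resource efficiency
                        [2.0, 4.0, 1.0]])   # performance

eigvals, eigvecs = np.linalg.eig(comparisons)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()       # normalized criteria weights

# candidate machines scored per criterion on a 0..1 scale
machines = {"edge-a": [0.9, 0.4, 0.6], "edge-b": [0.5, 0.8, 0.7]}
scores = {name: float(np.dot(weights, vals)) for name, vals in machines.items()}
print(weights, "->", max(scores, key=scores.get))
```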


2017 ◽  
Vol 10 (13) ◽  
pp. 162
Author(s):  
Amey Rivankar ◽  
Anusooya G

Cloud computing is the latest trend in large-scale distributed computing. It provides diverse services on demand over distributed resources such as servers, software, and databases. One of the challenging problems in cloud data centers is managing the load across different reconfigurable virtual machines. Thus, in the near future of the cloud computing field, providing a mechanism for efficient resource management will be very significant. Many load balancing algorithms have already been implemented and executed to manage resources efficiently and adequately. The objective of this paper is to analyze the shortcomings of existing algorithms and implement a new algorithm that gives an optimized load balancing result.
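
Since the abstract does not detail the new algorithm, the sketch below shows weighted least-connections, a common baseline for spreading requests across virtual machines of unequal capacity.

```python
# Assign each request to the VM with the lowest load relative to its
# capacity, so larger VMs absorb proportionally more requests.

class VirtualMachine:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity    # relative processing power
        self.active = 0             # requests currently being served

def pick_vm(vms):
    return min(vms, key=lambda vm: vm.active / vm.capacity)

vms = [VirtualMachine("vm-small", 1.0), VirtualMachine("vm-large", 4.0)]
for request in range(10):
    pick_vm(vms).active += 1        # assigned; decremented on completion
print({vm.name: vm.active for vm in vms})
```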


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3071 ◽  
Author(s):  
Jun-Hong Park ◽  
Hyeong-Su Kim ◽  
Won-Tae Kim

Edge computing is proposed to solve the problem of centralized cloud computing caused by a large number of IoT (Internet of Things) devices. The IoT protocols need to be modified according to the edge computing paradigm, in which the edge computing devices that analyze IoT data are distributed to the edge networks. The MQTT (Message Queuing Telemetry Transport) protocol, a data distribution protocol widely adopted in many international IoT standards, is suitable for cloud computing because it uses a centralized broker to effectively collect and transmit data. However, standard MQTT may suffer from a serious traffic congestion problem on the broker, causing long transfer delays when massive numbers of IoT devices are connected to it. In addition, the big data exchange between the IoT devices and the broker decreases the network capacity of the edge networks. In this paper, the authors propose a novel MQTT with a multicast mechanism to minimize data transfer delay and network usage for massive IoT communications. The proposed MQTT reduces data transfer delays by establishing bidirectional SDN (Software Defined Networking) multicast trees between the publishers and the subscribers, bypassing the centralized broker. As a result, it can reduce transmission delay by 65% and network usage by 58% compared with the standard MQTT.
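
For contrast with the multicast design, this is the standard broker-mediated MQTT flow the paper bypasses, assuming the paho-mqtt 1.x package and a reachable broker; the host and topic are placeholders.

```python
# Standard MQTT: every message is relayed through the central broker,
# which is exactly the bottleneck the proposed multicast scheme avoids.

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # subscribers receive data relayed by the broker, not the publisher
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)
client.subscribe("sensors/temperature")
client.publish("sensors/temperature", "21.5")
client.loop_forever()   # the broker remains the single relay point
```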


2019 ◽  
Vol 16 (9) ◽  
pp. 3989-3994
Author(s):  
Jaspreet Singh ◽  
Deepali Gupta ◽  
Neha Sharma

Nowadays, cloud computing is developing quickly, and customers are requesting more services and superior outcomes. In the cloud domain, load balancing has become a very intriguing and crucial research area. A number of algorithms have been proposed to provide efficient mechanisms for distributing cloud users' requests across the pooled cloud resources. Load balancing in the cloud should provide notable functional benefits to cloud users and, at the same time, should prove beneficial to cloud service providers. In this paper, the pre-existing load balancing techniques are explored. The paper provides a landscape for classifying distinct load balancing algorithms based on several parameters and also addresses the performance assessment of various load balancing algorithms. This comparative assessment will help in proposing a competent load balancing technique that intensifies the performance of cloud data centers.
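
In the spirit of the survey's comparative assessment, the sketch below contrasts round-robin with least-loaded assignment on one synthetic request trace, reporting maximum server load as the assessment parameter; the workload values are illustrative.

```python
# Compare two classic balancing policies on the same request trace:
# a lower maximum load means the work is spread more evenly.

import itertools

requests = [5, 1, 8, 2, 9, 3, 7, 2, 6, 4]      # per-request costs
NUM_SERVERS = 3

def round_robin(reqs):
    loads = [0] * NUM_SERVERS
    for cost, i in zip(reqs, itertools.cycle(range(NUM_SERVERS))):
        loads[i] += cost
    return loads

def least_loaded(reqs):
    loads = [0] * NUM_SERVERS
    for cost in reqs:
        loads[loads.index(min(loads))] += cost
    return loads

print("round-robin max load :", max(round_robin(requests)))
print("least-loaded max load:", max(least_loaded(requests)))
```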


2019 ◽  
Author(s):  
Girish L

Network and cloud data centers generate a great deal of data every second; this data can be collected as time-series data. A time series is a sequence taken at successive, equally spaced points in time; that is, the values of specific data recorded at a particular interval up to a specific time constitute time-series data. This time-series data can be collected using system metrics like CPU, memory, and disk utilization. The TICK Stack is an acronym for a platform of open-source tools built to make collection, storage, graphing, and alerting on time-series data incredibly easy. As data collectors, the authors use both Telegraf and Collectd, with the time-series database InfluxDB for storing and analyzing the data. For plotting and visualizing, they use Chronograf along with Grafana. Kapacitor is used for alert refinement: once system metrics usage exceeds the specified threshold, an alert is generated and sent to the system admin.
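
A hedged sketch of the alerting step, assuming the `influxdb` 1.x Python client and a `metrics` database populated by the collectors; Kapacitor does this natively, and the query, threshold, and `notify_admin` stub are illustrative.

```python
# Fetch the latest CPU reading from InfluxDB and raise an alert when
# it exceeds the configured threshold.

from influxdb import InfluxDBClient

CPU_THRESHOLD = 90.0   # percent

def notify_admin(message):
    print("ALERT:", message)        # stand-in for a mail/chat notification

client = InfluxDBClient(host="localhost", port=8086, database="metrics")
result = client.query('SELECT LAST("cpu_percent") FROM "system"')
for point in result.get_points():
    if point["last"] > CPU_THRESHOLD:
        notify_admin(f"CPU at {point['last']}% exceeds {CPU_THRESHOLD}%")
```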


Author(s):  
Abdullah Fadil ◽  
Waskitho Wibisono

Cloud computing is a heterogeneous and distributed environment, composed of clusters of networked servers with varying computing resource capacities that support the service models built on top of them. Virtual machines (VMs) serve as the representation of dynamically available computing resources that can be allocated and reallocated on demand. Live migration of VMs among the physical servers within a cloud data center is used to achieve consolidation and maximize VM utilization. In VM consolidation procedures, VM selection and placement often rely on a single, static criterion. This study proposes VM selection and placement using multi-criteria decision making (MCDM) in a dynamic VM consolidation procedure in a cloud data center environment, in order to improve cloud computing services. A practical approach is used to develop an OpenStack Cloud-based computing environment, integrating VM selection and VM placement into the consolidation procedure using OpenStack-Neat. The results show that the VM selection and placement method via live migration is able to compensate for the losses caused by down-times of 11.994 seconds in response time. Response times increased by 6 ms during live migration of a VM from the source host to the destination host. The average response time of each VM spread across the compute nodes after live migration was 67 ms, indicating load balance in the cloud computing system.
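
An illustrative MCDM step for the VM-selection phase, using a simple weighted sum over two normalized criteria (high CPU load relieves the overloaded host most; a small RAM footprint is cheapest to live-migrate); the paper's actual criteria and weights are not given in the abstract.

```python
# Score each VM on normalized benefit criteria and pick the best
# candidate for live migration off an overloaded host.

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def select_vm(vms, weights=(0.6, 0.4)):
    cpu = normalize([vm["cpu_load"] for vm in vms])
    small_ram = normalize([-vm["ram_gb"] for vm in vms])  # smaller RAM scores higher
    scores = [weights[0] * c + weights[1] * r for c, r in zip(cpu, small_ram)]
    return vms[scores.index(max(scores))]

vms = [{"name": "vm-1", "cpu_load": 0.8, "ram_gb": 2.0},
       {"name": "vm-2", "cpu_load": 0.6, "ram_gb": 8.0}]
print(select_vm(vms)["name"])   # vm-1: hot and small, the best candidate
```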


Author(s):  
Deepika T. ◽  
Prakash P.

The flourishing development of the cloud computing paradigm provides several services in the industrial business world. Power consumption by cloud data centers is one of the crucial issues for service providers in the cloud computing domain. Owing to rapid technology enhancements in cloud environments and the expansion of data centers, power utilization in data centers is expected to grow unabated. A diverse set of numerous connected devices, engaged with the ubiquitous cloud, results in unprecedented power utilization by the data centers, accompanied by increased carbon footprints. Nearly a million physical machines (PMs) are running across data centers, along with 5-6 million virtual machines (VMs). In the next five years, the power needs of this domain are expected to spiral up to 5% of global power production. Reducing virtual machine power consumption helps diminish the PMs' power draw; furthermore, since data center power consumption changes year by year, prediction methods can aid cloud vendors. Sudden fluctuations in power utilization can cause power outages in cloud data centers. This paper aims to forecast VM power consumption with the help of regressive predictive analysis, one of the Machine Learning (ML) techniques. The approach makes better predictions of future values using a Multi-Layer Perceptron (MLP) regressor, which provides 91% accuracy during the prediction process.
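
A minimal sketch of the regression approach, assuming scikit-learn's MLPRegressor and synthetic utilization data; the features, sizes, and hyperparameters are illustrative, not the paper's.

```python
# Train an MLP regressor to predict VM power draw from utilization
# features, then report held-out goodness of fit.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 3))   # cpu, memory, disk utilization
# synthetic power model: baseline plus CPU- and memory-driven draw, with noise
watts = 120 + 180 * X[:, 0] + 40 * X[:, 1] + rng.normal(0, 5, 1000)

X_train, X_test, y_train, y_test = train_test_split(X, watts, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```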

