A Novel Cost Based Model for Energy Consumption in Cloud Computing

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
A. Horri ◽  
Gh. Dastghaibyfard

Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize the energy consumption of cloud infrastructure while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated under different scenarios. The proposed model accounts for cache interference costs, which depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that this energy consumption can be considerable and varies with parameters such as the quantum length, the data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.
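The abstract above describes a power model whose cache-interference overhead grows with data size under a time-shared (quantum-based) policy. The following is a minimal illustrative sketch of that idea, not the paper's actual model: the linear power coefficients, the quantum, the per-MB cache penalty, and the 8-core host are all invented placeholders.

```python
# Hypothetical sketch of a time-shared energy model with a cache-interference
# term that grows with data size. All coefficients are illustrative assumptions.

def host_power(utilization, p_idle=100.0, p_max=250.0):
    """Linear power model: idle power plus a utilization-proportional share (watts)."""
    return p_idle + (p_max - p_idle) * utilization

def time_shared_energy(task_seconds, n_vms, data_mb,
                       quantum=0.05, cache_penalty_per_mb=1e-4):
    """Energy (joules) for running tasks under a time-shared policy.

    Context switches every `quantum` seconds add a cache-interference
    overhead proportional to each task's working-set size (data_mb).
    """
    switches = task_seconds / quantum * n_vms
    overhead_seconds = switches * cache_penalty_per_mb * data_mb
    total_seconds = task_seconds + overhead_seconds
    utilization = min(1.0, n_vms / 8.0)   # assume an 8-core host
    return host_power(utilization) * total_seconds

# Larger working sets cause more cache interference, so energy rises with data size.
e_small = time_shared_energy(task_seconds=60, n_vms=2, data_mb=10)
e_large = time_shared_energy(task_seconds=60, n_vms=2, data_mb=500)
```

Note how the same dependence on the quantum, data size, and VM count that the abstract reports falls out of the overhead term.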

2014 ◽  
Vol 986-987 ◽  
pp. 1383-1386
Author(s):  
Zhen Xing Yang ◽  
He Guo ◽  
Yu Long Yu ◽  
Yu Xin Wang

Cloud computing is an emerging paradigm that delivers infrastructure, platform, and software as services in a pay-as-you-go model. However, with the growth of cloud computing, large-scale data centers consume huge amounts of electrical energy, resulting in high operational costs and environmental problems. Existing energy-saving algorithms based on live migration do not consider migration energy consumption, and most are designed for homogeneous cloud environments. In this paper, we take a first step toward modeling energy consumption in a heterogeneous cloud environment, including migration energy consumption. Based on this energy model, we design the energy-saving best fit decreasing (ESBFD) and energy-saving first fit decreasing (ESFFD) algorithms. We further present results of several experiments in CloudSim using traces from PlanetLab. The experiments show that the proposed algorithms effectively reduce data center energy consumption in a heterogeneous cloud environment compared to existing algorithms such as NEA, DVFS, ST (Single Threshold), and DT (Double Threshold).
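A best-fit-decreasing placement of the general kind named above can be sketched as follows. This is only an illustration of the energy-aware bin-packing idea on heterogeneous hosts; the host power figures are invented, and the paper's migration-energy term is not reproduced.

```python
# Illustrative energy-aware Best Fit Decreasing placement on heterogeneous hosts.
# Power coefficients are made-up; this is not the paper's exact ESBFD algorithm.

def power(util, p_idle, p_max):
    """Linear host power model (watts) at a given CPU utilization."""
    return p_idle + (p_max - p_idle) * util

def esbfd(vms, hosts):
    """Place each VM (largest CPU demand first) on the host whose power
    draw rises least; waking an idle host pays its full idle power."""
    placement = {}
    for i, demand in sorted(enumerate(vms), key=lambda x: -x[1]):
        best, best_delta = None, float("inf")
        for h in hosts:
            if h["used"] + demand > 1.0:          # capacity check
                continue
            before = power(h["used"], h["p_idle"], h["p_max"]) if h["used"] > 0 else 0.0
            after = power(h["used"] + demand, h["p_idle"], h["p_max"])
            delta = after - before                # includes idle power when waking a host
            if delta < best_delta:
                best, best_delta = h, delta
        if best is None:
            raise RuntimeError("no host can fit VM %d" % i)
        best["used"] += demand
        placement[i] = best["name"]
    return placement

hosts = [
    {"name": "efficient", "used": 0.0, "p_idle": 70.0, "p_max": 180.0},
    {"name": "legacy",    "used": 0.0, "p_idle": 120.0, "p_max": 300.0},
]
plan = esbfd([0.3, 0.5, 0.2], hosts)
```

Because waking the legacy host costs its higher idle power, the heuristic packs all three VMs onto the efficient host, which is exactly the consolidation behavior the abstract credits with the energy savings.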


Author(s):  
Ritu Garg ◽  
Neha Shukla

Cloud computing makes utility computing possible with a pay-as-you-go model. It virtualizes systems by pooling and sharing resources, so more than one workflow must be handled at the same time. Workflows are the standard way to represent compute-intensive applications in the scientific and engineering domains. Hence, in this article, the authors present a scheduling heuristic for multiple workflows running in parallel in the cloud environment, with the aim of reducing energy consumption, one of the major concerns of cloud data centers, alongside execution performance. In the proposed approach, clustering is first performed to minimize the energy consumption and execution time of communication among precedence-constrained tasks. Then the clusters are scheduled on the best available energy-efficient resources. Finally, DVFS is applied to further reduce energy consumption when nodes are in the idle and communication stages. The simulation was performed on CloudSim, and the results show a reduction in energy consumption of up to 42%.
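The first and third stages described above can be sketched roughly as below: tasks joined by heavy communication edges are merged into one cluster (so their communication stays local), and idle intervals are billed at a DVFS-reduced power. The threshold, power figures, and the union-find formulation are illustrative assumptions, not the authors' exact method.

```python
# Sketch of (1) communication-aware task clustering and (2) a DVFS idle-power
# model. Thresholds and power figures are invented placeholders.

def cluster_tasks(edges, comm_threshold):
    """Merge tasks whose communication volume exceeds the threshold (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for (a, b), volume in edges.items():
        ra, rb = find(a), find(b)           # registers both endpoints
        if volume >= comm_threshold:
            parent[ra] = rb                 # union: heavy communicators co-locate
    clusters = {}
    for t in list(parent):
        clusters.setdefault(find(t), set()).add(t)
    return list(clusters.values())

def dvfs_energy(busy_s, idle_s, p_busy=200.0, p_idle=100.0, idle_scale=0.4):
    """Idle intervals run at reduced frequency/voltage: power drops by idle_scale."""
    return busy_s * p_busy + idle_s * p_idle * idle_scale

# Edge weights are communication volumes between precedence-constrained tasks.
edges = {("t1", "t2"): 50, ("t2", "t3"): 5, ("t3", "t4"): 80}
groups = cluster_tasks(edges, comm_threshold=40)
```

With a threshold of 40, the heavy t1-t2 and t3-t4 edges each collapse into one cluster, while the light t2-t3 edge keeps the two clusters apart.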


Cloud computing assigns tasks to a set of connections, software, and services that can be utilized by the user over a network. The growing demand for cloud infrastructure has drastically scaled up the energy needs of data centers, which has become a critical issue. It also leads to high carbon emissions, which are not environmentally friendly, so an energy-efficient approach to cloud computing is needed. This research paper works toward a theoretical notion of sustainable development, proposing an incentive for reducing global warming through effective clustering techniques and methods. The paper aims to reduce cloud events by applying MapReduce to large event clusters formed in the cloud. The purpose is to develop a better methodology for handling cloud computing events by clustering and reducing similar types of events. This approach might reduce emissions of carbon dioxide (a greenhouse gas) through lower server usage in cloud data centers. With the energy consumed by IT services in cloud computing, it is necessary for the developing technology to progress toward sustainable development rather than thrashing and harnessing energy by every possible means.


2021 ◽  
Vol 11 (13) ◽  
pp. 5849
Author(s):  
Nimra Malik ◽  
Muhammad Sardaraz ◽  
Muhammad Tahir ◽  
Babar Shah ◽  
Gohar Ali ◽  
...  

Cloud computing is a rapidly growing technology that has been adopted in various fields in recent years, such as business, research, industry, and computing. Cloud computing provides different services over the internet, thus eliminating the need for personalized hardware and other resources. Cloud computing environments face challenges in terms of resource utilization, energy efficiency, heterogeneous resources, etc. Task scheduling and virtual machine (VM) consolidation techniques are used to tackle these issues. Task scheduling has been extensively studied in the literature, with different parameters and objectives. In this article, we address the problem of energy consumption and efficient resource utilization in virtualized cloud data centers. The proposed algorithm is based on task classification and thresholds for efficient scheduling and better resource utilization. In the first phase, workflow tasks are pre-processed to avoid bottlenecks by placing tasks with many dependencies and long execution times in separate queues. In the next step, tasks are classified based on the intensities of the required resources. Finally, Particle Swarm Optimization (PSO) is used to select the best schedules. Experiments were performed to validate the proposed technique, and comparative results obtained on benchmark datasets are presented. The results show that the proposed algorithm outperforms the algorithms it was compared against in terms of energy consumption, makespan, and load balancing.
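The pre-processing and classification phases described above might look roughly like the following. The dependency and runtime thresholds, the task fields, and the "dominant resource" classification rule are all illustrative assumptions; the PSO scheduling stage is omitted.

```python
# Sketch of threshold-based pre-processing and resource-intensity classification.
# Thresholds and task attributes are invented for illustration.

def preprocess(tasks, dep_threshold=3, runtime_threshold=100.0):
    """Route tasks with many dependencies or long runtimes into a separate
    queue so they do not bottleneck the common queue."""
    heavy, normal = [], []
    for t in tasks:
        if t["deps"] >= dep_threshold or t["runtime"] >= runtime_threshold:
            heavy.append(t)
        else:
            normal.append(t)
    return heavy, normal

def classify(task):
    """Label a task by its dominant resource demand (cpu-, io-, or mem-intensive)."""
    demands = {"cpu": task["cpu"], "io": task["io"], "mem": task["mem"]}
    return max(demands, key=demands.get)

tasks = [
    {"name": "a", "deps": 5, "runtime": 20.0, "cpu": 0.9, "io": 0.1, "mem": 0.2},
    {"name": "b", "deps": 0, "runtime": 10.0, "cpu": 0.2, "io": 0.8, "mem": 0.1},
]
heavy, normal = preprocess(tasks)
```

A scheduler can then match each class to the resource type it stresses, which is the intuition behind classifying before optimizing.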


2017 ◽  
Vol 16 (6) ◽  
pp. 6953-6961
Author(s):  
Kavita Redishettywar ◽  
Prof. Rafik Juber Thekiya

Cloud computing is a vigorous technology by which a user can get software, applications, an operating system, and hardware as a service without actually owning them, paying only according to usage. Cloud computing is a hot research topic these days. With the rapid growth of Internet technology, cloud computing has become the main source of computing for small as well as big IT companies. In the cloud computing milieu, cloud data centers and cloud users are globally distributed, so it is a big challenge for cloud data centers to efficiently handle the requests coming from millions of users and to service them efficiently. Load balancing distributes workload among multiple nodes and ensures that no single node is overloaded. It helps to improve system performance and resource utilization. We propose an improved load balancing algorithm for job scheduling in the cloud environment using K-Means clustering of cloudlets and virtual machines. All the cloudlets given by the user are divided into three clusters depending on the client's priority, the cost, and the instruction length of the cloudlet. The virtual machines inside the data center hosts are also grouped into multiple clusters depending on VM capacity in terms of processor, memory, and bandwidth. Sorting is applied at both ends to reduce latency. Multiple experiments have been conducted with different configurations of cloudlets and virtual machines. Parameters such as waiting time, execution time, turnaround time, and usage cost have been computed in the CloudSim environment to demonstrate the results. The experimental results show that the improved load balancing algorithm outperforms other job scheduling algorithms.
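The cloudlet-grouping step described above can be illustrated with a minimal K-Means over (priority, cost, instruction length) feature vectors. The feature values, normalization, and k=3 fixed seeding below are simplifications for the sketch, not the authors' exact pipeline.

```python
# Minimal K-Means sketch for grouping cloudlets by (priority, cost,
# instruction_length). Features are pre-normalized, invented values.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)                 # initial centers from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean distance).
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # Recompute each center as the mean of its group; keep empty centers as-is.
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Each cloudlet: (priority, cost, instruction_length), roughly normalized to [0, 1].
cloudlets = [(0.9, 0.8, 0.9), (0.85, 0.9, 0.95), (0.1, 0.2, 0.1),
             (0.15, 0.1, 0.2), (0.5, 0.5, 0.5), (0.55, 0.45, 0.5)]
centers, groups = kmeans(cloudlets, k=3)
```

In the full scheme, the VM side would be clustered the same way on (processor, memory, bandwidth), and the sorted cluster lists matched against each other.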


2017 ◽  
Vol 5 (4RACSIT) ◽  
pp. 63-68
Author(s):  
Dinesh Raj Paneru ◽  
Madhu B. R. ◽  
Santosh Naik

Cloud computing provides services such as Platform as a Service (PaaS), Infrastructure as a Service (IaaS), and Software as a Service (SaaS), offering subscription-based computing resources and storage. Cloud computing is boosted by virtualization technology. Moving running applications or VMs from one physical machine to another while the client remains connected is termed live VM migration. VM migration is enabled by virtualization technology to balance load in data centers, and migration is done primarily to manage resources dynamically. The main goal of server consolidation is to eliminate the problem of server sprawl: it tries to pack VMs from lightly loaded hosts onto fewer machines that satisfy their resource needs. Load balancing, on the other hand, helps distribute workloads across multiple computing resources; when lightly loaded machines are available, it prevents other machines from becoming overloaded and maintains efficiency. To balance load across systems, the live migration technique is applied with various algorithms: moving virtual machines from fully loaded physical machines to lightly loaded ones is the mechanism for balancing overall system load. Where energy consumption in cloud computing is the concern, VM consolidation and server consolidation come into play through VM migration, which itself leads to lower energy consumption.
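The consolidation idea described above, migrating VMs off lightly loaded hosts so those hosts can be powered down, can be sketched as follows. The 30% "lightly loaded" threshold and the busiest-first packing order are illustrative assumptions.

```python
# Sketch of server consolidation via live migration: empty lightly loaded
# hosts into the busiest hosts that still have capacity. The threshold is
# an invented placeholder.

def consolidate(hosts, light_threshold=0.3):
    """Return a list of migrations (vm, src, dst) that drain lightly
    loaded hosts so they can be switched off."""
    migrations = []
    light = [h for h in hosts if 0 < h["load"] <= light_threshold]
    for src in light:
        for vm in list(src["vms"]):
            # Busiest destinations first, so load packs onto as few machines as possible.
            for dst in sorted(hosts, key=lambda h: -h["load"]):
                if dst is src or dst in light:
                    continue
                if dst["load"] + vm["load"] <= 1.0:
                    migrations.append((vm["name"], src["name"], dst["name"]))
                    src["vms"].remove(vm)
                    src["load"] -= vm["load"]
                    dst["vms"].append(vm)
                    dst["load"] += vm["load"]
                    break
    return migrations

hosts = [
    {"name": "h1", "load": 0.7, "vms": [{"name": "vm1", "load": 0.7}]},
    {"name": "h2", "load": 0.2, "vms": [{"name": "vm2", "load": 0.2}]},
]
moves = consolidate(hosts)
```

After the run, h2 is empty and could be suspended, which is where the energy saving the abstract mentions comes from.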


2019 ◽  
Vol 20 (2) ◽  
pp. 399-432 ◽  
Author(s):  
Parminder Singh ◽  
Pooja Gupta ◽  
Kiran Jyoti ◽  
Anand Nayyar

The emerging cloud computing environment attracts many application providers to deploy web applications on cloud data centers. The primary attraction is elasticity, which allows resources to be auto-scaled on demand. However, web applications usually have dynamic workloads that are hard to predict. Cloud service providers and researchers are working to reduce cost while maintaining Quality of Service (QoS). One of the key challenges for web applications in cloud computing is auto-scaling. Auto-scaling in cloud computing is still in its infancy and requires a detailed investigation of the taxonomy, approaches, and types of resources mapped to current research. In this article, we present a literature survey of auto-scaling techniques for web applications in cloud computing. This survey helps the research community identify the requirements of auto-scaling techniques. We present a taxonomy of the reviewed articles with parameters such as auto-scaling technique, approach, resources, monitoring tool, experiment, workload, and metric. Based on this analysis, we propose new areas of research in this direction.
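As a point of reference for the techniques this survey classifies, the simplest family it covers is rule-based (threshold) auto-scaling, which one control-loop tick of might look like this. The thresholds and VM bounds are illustrative assumptions, not values from any surveyed system.

```python
# Minimal threshold-based auto-scaler: one decision per control-loop tick.
# Thresholds and VM limits are invented placeholders.

def autoscale(current_vms, avg_cpu, upper=0.75, lower=0.25,
              min_vms=1, max_vms=10):
    """Return the new VM count given average CPU utilization over the last window."""
    if avg_cpu > upper and current_vms < max_vms:
        return current_vms + 1   # scale out to protect QoS
    if avg_cpu < lower and current_vms > min_vms:
        return current_vms - 1   # scale in to cut cost
    return current_vms           # within the deadband: hold steady
```

More sophisticated approaches in the survey's taxonomy (predictive, queueing-based, learning-based) replace the fixed thresholds with workload forecasts, but the scale-out/scale-in structure is the same.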


2018 ◽  
Vol 7 (2.8) ◽  
pp. 550 ◽  
Author(s):  
G Anusha ◽  
P Supraja

Cloud computing is a growing technology nowadays that provides various resources to perform complex tasks. These complex tasks are performed in data centers, which serve incoming tasks by providing resources such as CPU, storage, network bandwidth, and memory; this has resulted in an increase in the total number of data centers in the world. These data centers consume large volumes of energy to perform their operations, which leads to high operating costs. Computing resources are the key cause of power consumption in data centers, along with the air-handling and cooling systems. Energy consumption in data centers is proportional to resource usage, and excessive energy consumption by data centers results in large power bills, so there is a need to increase the energy efficiency of such data centers. We propose an Energy-Aware Dynamic Virtual Machine Consolidation (EADVMC) model that focuses on the PM-selection, VM-selection, and VM-placement phases, reducing energy consumption while keeping Quality of Service (QoS) at a considerable level.
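The first two of the three phases named above (PM selection, then VM selection) can be sketched as follows. The 0.8/0.2 thresholds and the "smallest VM that relieves the overload" selection rule are illustrative assumptions, not the paper's exact criteria; the placement phase would then reuse an energy-aware fit heuristic.

```python
# Sketch of PM selection and VM selection for dynamic VM consolidation.
# Thresholds and the selection rule are invented placeholders.

OVER, UNDER = 0.8, 0.2

def select_pms(hosts):
    """Phase 1: flag overloaded and underloaded physical machines."""
    over = [h for h in hosts if h["load"] > OVER]
    under = [h for h in hosts if h["load"] < UNDER]
    return over, under

def select_vm(host):
    """Phase 2: from an overloaded host, pick the smallest VM whose removal
    brings the host back under the threshold (fewer bytes to migrate)."""
    for vm in sorted(host["vms"], key=lambda v: v["load"]):
        if host["load"] - vm["load"] <= OVER:
            return vm
    return max(host["vms"], key=lambda v: v["load"])  # fallback: evict the largest

hosts = [
    {"name": "h1", "load": 0.95, "vms": [{"name": "a", "load": 0.6},
                                         {"name": "b", "load": 0.35}]},
    {"name": "h2", "load": 0.1, "vms": [{"name": "c", "load": 0.1}]},
]
over, under = select_pms(hosts)
victim = select_vm(over[0])
```

Underloaded PMs like h2 are then drained entirely and powered down, which is where the consolidation's energy savings come from.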

