Empirical evaluation of power saving policies for data centers

2012 ◽  
Vol 40 (3) ◽  
pp. 18-22 ◽  
Author(s):  
Michele Mazzucco ◽  
Isi Mitrani
Author(s):  
Osvaldo Marra ◽  
Maria Mirto ◽  
Massimo Cafaro ◽  
Giovanni Aloisio



Author(s):  
Piotr Arabas

The past years have brought about a great variety of clusters and clouds. This, combined with their increasing size and complexity, has resulted in an obvious need for power-saving control mechanisms. After presenting the foundations on which such solutions are built (namely, low-level power control interfaces, CPU governors, and network topologies), the paper summarizes network and cluster resource control algorithms. Finally, the need for integrated, hierarchical control is expressed, and specific examples are provided.


2020 ◽  
Vol 3 (2) ◽  
pp. 11-20
Author(s):  
Noora N. Bhaya ◽  
Rabah A. Ahmed

Cloud computing is a fast-growing technology used by major corporations because of the flexible framework it provides to consumers. Cloud technology requires large data centers housing many servers and other IT equipment. One main problem with these data centers is the vast amount of power consumed during server operation. This reduces the financial benefit and increases the need to produce more energy to cover the needs of operating the cloud infrastructure. This paper proposes an approach for managing the virtual central processing units (vCPUs) of a virtual machine to improve server power efficiency. A framework is used to study the proposed approach while processing different types of workloads widely found in general-purpose cloud computing applications. Results indicate an improvement in server power saving.
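The abstract does not spell out the paper's exact vCPU-management policy, so the following is only a hedged sketch of the general idea: resizing a VM's vCPU allocation to track recent utilization so that idle vCPUs do not keep host cores occupied. The function name and the thresholds are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch only: scale a VM's vCPU count with its observed
# average utilization. Thresholds (80% up, 30% down) are invented.

def target_vcpus(current_vcpus: int, avg_utilization: float,
                 min_vcpus: int = 1, max_vcpus: int = 16) -> int:
    """Return the vCPU count a VM should have next, given average
    utilization in [0, 1] over its current vCPUs."""
    if avg_utilization > 0.8 and current_vcpus < max_vcpus:
        return current_vcpus + 1   # hot workload: add a vCPU
    if avg_utilization < 0.3 and current_vcpus > min_vcpus:
        return current_vcpus - 1   # mostly idle: release a vCPU
    return current_vcpus           # within band: leave it alone
```

On a real hypervisor the returned count would then be applied through the platform's vCPU hot-plug interface.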


Cloud computing, with its great potential for low cost and on-demand services, is an attractive computing platform. Modern data centers for cloud computing face steadily increasing complexity because of the growing number of clients and their expanding resource demands. A great deal of effort is currently focused on giving the cloud framework autonomic behavior, so that it can make decisions about virtual machine (VM) management across the data center without human intervention. Most self-organizing solutions rely on eager migration, which attempts to reduce the number of working servers hosting virtual machines. These self-organizing solutions produce needless migrations under unpredictable workloads, and the unnecessary migration process itself consumes large amounts of electrical energy. To overcome this issue, this project develops a novel VM migration scheme called eeadSelfCloud. The proposed scheme places virtual machines in a cloud center by weighing several factors, such as the basic resource requirements during virtual machine setup, dynamic resource allocation, peak software loading, software execution, and power saving at the data center. Data center utilization, average node utilization, request rejection ratio, hop count, and power consumption are taken as metrics for evaluating the proposed approach. The analysis reports that the proposed approach performs better than the existing approaches.
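eeadSelfCloud itself is not specified in enough detail in this summary to reproduce. As an illustration of the kind of guard that suppresses the needless migrations criticized above, the following hypothetical sketch only approves a migration after host utilization stays below a threshold for a full observation window, so a single transient dip in an unpredictable workload does not trigger an energy-hungry move.

```python
from collections import deque

# Hypothetical hysteresis gate (not the eeadSelfCloud algorithm):
# a host is only considered for VM consolidation after `window`
# consecutive utilization samples fall below `threshold`.

class MigrationGate:
    def __init__(self, threshold: float = 0.2, window: int = 5):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, host_utilization: float) -> bool:
        """Record one utilization sample in [0, 1]; return True only
        when the whole window is below the threshold."""
        self.samples.append(host_utilization)
        full = len(self.samples) == self.samples.maxlen
        return full and all(u < self.threshold for u in self.samples)
```

A utilization spike anywhere in the window resets the decision, which is precisely what distinguishes this from the eager-migration behavior the abstract argues against.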


2015 ◽  
pp. 266-288
Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing combines the advantages of several computing paradigms and introduces ubiquity in the provisioning of services such as software, platform, and infrastructure. Data centers, as the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. Offloading workloads from local resources reduces the energy consumption of the end hosts but increases that of the transport network and the data centers. This chapter presents a detailed survey of the existing mechanisms for designing the Internet backbone with data centers so as to deliver cloud services energy-efficiently. The survey is followed by a case study in which Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics are used to guarantee either minimum-delay or maximum-power-saving cloud services, with high-performance data centers assumed to be located at the core nodes of an IP-over-WDM network. The chapter concludes by summarizing the surveyed schemes in a taxonomy covering their pros and cons, followed by a discussion of research challenges and opportunities.
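The chapter's MILP formulations are not reproduced in this summary. As a rough, hypothetical illustration of the power-saving objective, the sketch below is a greedy heuristic (the chapter pairs its MILP models with heuristics of this general flavor) that places each service demand on the data-center node with the lowest incremental power cost: the node's idle power if it must be switched on, plus a per-unit dynamic cost. All node parameters are invented for the example.

```python
# Hypothetical greedy stand-in for a power-minimizing provisioning model.
# nodes: {name: {"idle_w": idle power when on, "w_per_unit": dynamic
# power per unit of load, "cap": capacity}}.

def place_demands(demands, nodes):
    """Assign each demand to the node with the cheapest incremental
    power cost. Returns (per-node load, total power in watts)."""
    load = {name: 0.0 for name in nodes}
    for d in demands:
        best, best_cost = None, float("inf")
        for name, spec in nodes.items():
            if load[name] + d > spec["cap"]:
                continue  # node cannot absorb this demand
            startup = spec["idle_w"] if load[name] == 0 else 0.0
            cost = startup + d * spec["w_per_unit"]
            if cost < best_cost:
                best, best_cost = name, cost
        if best is None:
            raise ValueError("demand exceeds remaining capacity")
        load[best] += d
    total = sum(spec["idle_w"] + load[n] * spec["w_per_unit"]
                for n, spec in nodes.items() if load[n] > 0)
    return load, total
```

The consolidation effect falls out of the startup term: once a node is on, adding load to it is cheaper than waking another node, so demands pack onto few servers and idle nodes can stay powered off, which is the intuition behind the MILP's power-saving objective.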


2014 ◽  
Vol 50 (7) ◽  
pp. 518-527 ◽  
Author(s):  
Masayuki NAKAMURA ◽  
Hideaki HASHIMOTO ◽  
Ryota NAKAMURA ◽  
Joji URATA