Deployment of a Hybrid Multicast Switch in Energy-Aware Data Center Network: A Case of Fat-Tree Topology

2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
Tosmate Cheocherngngarn ◽  
Jean Andrian ◽  
Deng Pan

Recently, energy efficiency, or green IT, has become a pressing issue for many IT infrastructures as they adopt energy-efficient strategies in their enterprise IT systems to minimize operational costs. Networking devices are shared resources connecting important IT infrastructure; in a data center network in particular, they operate 24/7 and consume a large amount of energy, and this consumption has been shown to be largely independent of the traffic passing through the devices. As a result, power consumption in networking devices is becoming an increasingly critical problem, of interest to both the research community and the general public. Multicast benefits group communications by saving link bandwidth and improving application throughput, both of which are important for a green data center. In this paper, we study the deployment strategy of multicast switches in hybrid mode in an energy-aware data center network, taking the well-known fat-tree topology as a case study. The objective is to find the best locations to deploy multicast switches so as to achieve not only optimal bandwidth utilization but also minimal power consumption. We show that our proposed algorithm can readily achieve nearly 50% energy savings.
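As a rough illustration of the setting described above (not the paper's placement algorithm), the following sketch computes the standard component counts of a k-ary fat-tree and contrasts link traversals under per-receiver unicast versus an idealized multicast switch that duplicates packets near the receivers. The traffic model and `hops_per_path` parameter are illustrative assumptions.

```python
# Sketch: k-ary fat-tree component counts and a toy unicast-vs-multicast
# link-traversal comparison. The fat-tree formulas are standard; the
# traffic comparison is a simplified illustration only.

def fat_tree_counts(k):
    """Return (core, aggregation, edge, hosts) for a k-ary fat-tree."""
    core = (k // 2) ** 2
    aggregation = k * (k // 2)   # k pods, k/2 aggregation switches each
    edge = k * (k // 2)          # k pods, k/2 edge switches each
    hosts = (k ** 3) // 4
    return core, aggregation, edge, hosts

def unicast_vs_multicast(group_size, hops_per_path=6):
    """Unicast repeats the whole path once per receiver; an ideal
    multicast switch sends one shared path plus one extra hop per
    additional receiver. Returns (unicast_hops, multicast_hops)."""
    unicast = group_size * hops_per_path
    multicast = hops_per_path + (group_size - 1)
    return unicast, multicast
```

For a 10-receiver group the multicast traversal count is a fraction of the unicast one, which is the bandwidth-saving effect the paper exploits when choosing where to place the multicast switch.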

Author(s):  
Burak Kantarci ◽  
Hussein T. Mouftah

Cloud computing aims to migrate IT services to distant data centers in order to reduce the dependency of those services on limited local resources. It provides access to remote computing resources via Web services, while the end user remains unaware of how the IT infrastructure is managed. Alongside the novelties and advantages of cloud computing, the deployment of large numbers of servers and data centers introduces the challenge of high energy consumption. Additionally, transporting IT services over the Internet backbone compounds the energy consumption problem of the backbone infrastructure. In this chapter, the authors cover energy-efficient cloud computing studies in the data center involving various aspects, such as the reduction of processing, storage, and data center network-related power consumption. They first provide a brief overview of existing approaches to keeping data centers cool, which can be grouped mainly into studies on virtualization techniques, energy-efficient data center network design schemes, and studies that monitor data center thermal activity with Wireless Sensor Networks (WSNs). The authors also present solutions that aim to reduce energy consumption in data centers by considering the communications aspects over the backbone of large-scale cloud systems.


Author(s):  
Muhammad Ishaq ◽  
Mohammad Kaleem ◽  
Numan Kifayat

This chapter briefly introduces the data center network and reviews the challenges for future intra-data-center networks in terms of scalability, cost effectiveness, power efficiency, upgrade cost, and bandwidth utilization. Current data center network architecture is discussed in detail, and its drawbacks are pointed out in terms of the above-mentioned parameters. A detailed background is provided on how the technology moved from opaque to transparent optical networks. Additionally, the chapter covers the different data center network architectures proposed so far by various researchers, teams, and companies to address current problems and meet the demands of future intra-data-center networks.


2013 ◽  
Vol 325-326 ◽  
pp. 1730-1733 ◽  
Author(s):  
Si Yuan Jing ◽  
Shahzad Ali ◽  
Kun She

Much of the energy-aware resource provisioning research for cloud data centers considers only how to maximize resource utilization, i.e., how to minimize the number of required servers, without considering the overhead of a virtual machine (VM) placement change. In this work, we propose a new method that minimizes energy consumption and VM placement changes at the same time; moreover, we design an approximate algorithm based on network flow theory to solve it. The simulation results show that, compared to existing work, the proposed method slightly decreases energy consumption while greatly decreasing the number of VM placement changes.
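To make the trade-off concrete, here is a minimal greedy sketch that prefers a VM's current host when re-provisioning, so fewer placements change between rounds. The names and the first-fit fallback are illustrative assumptions; the paper's actual method is a network-flow-based approximation, which this heuristic does not reproduce.

```python
# Sketch: migration-aware greedy placement. Each VM stays on its current
# server when capacity allows; otherwise it is placed first-fit and
# counted as a migration.

def place_vms(vms, capacity, current=None):
    """vms: {vm_id: demand}; capacity: uniform per-server capacity;
    current: {vm_id: server_index} from the previous round (optional).
    Returns (placement, migrations)."""
    current = current or {}
    load = {}
    placement = {}
    migrations = 0
    # Place larger VMs first, a common bin-packing heuristic.
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        old = current.get(vm)
        if old is not None and load.get(old, 0) + demand <= capacity:
            target = old  # keep the VM where it already runs
        else:
            # First-fit onto the lowest-index server with room.
            target = next(s for s in range(len(vms))
                          if load.get(s, 0) + demand <= capacity)
            if old is not None:
                migrations += 1
        load[target] = load.get(target, 0) + demand
        placement[vm] = target
    return placement, migrations
```

A pure consolidation objective would ignore `current` entirely; keeping it in the loop is what trades a little packing quality for far fewer placement changes.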


2018 ◽  
Vol 7 (3) ◽  
pp. 1656
Author(s):  
Ramesh Pasupuleti ◽  
Ramachandraiah Uppu

As per Moore's law, the power consumption and heat density of multiprocessor systems are increasing proportionately. High temperature increases the leakage power consumption of the processor and can thus lead to thermal runaway. Efficiently managing the energy consumption of multiprocessor systems in order to increase battery lifetime is a major challenge on multiprocessor platforms. This article presents a Thermal Energy Aware Proportionate Scheduler (TEAPS) to reduce leakage power consumption. Simulation experiments illustrate that TEAPS reduces energy consumption by 16% with respect to the Mixed Proportionate Fair (PFAIR-M) scheduler and by 36% with respect to the Proportionate Fair (PFAIR) scheduler on a system consisting of 20 processors under full load.
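The temperature-leakage coupling mentioned above is often modeled to first order as leakage growing exponentially with die temperature. The sketch below uses that common model with made-up coefficients; it illustrates why a scheduler that keeps cores cooler saves energy, but it is not the TEAPS policy itself.

```python
# Sketch: first-order temperature-dependent leakage model,
# P_leak = P0 * exp(alpha * (T - T_ref)). Coefficients are illustrative.

import math

def leakage_power(temp_c, p0=1.0, alpha=0.02, t_ref=25.0):
    """Leakage power (arbitrary units) at die temperature temp_c (Celsius)."""
    return p0 * math.exp(alpha * (temp_c - t_ref))
```

Under this model a core held at 50 °C leaks noticeably less than one at 75 °C, which is the feedback loop (heat raises leakage, leakage raises heat) that a thermal-aware proportionate scheduler tries to break.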


2015 ◽  
Vol 4 (1) ◽  
pp. 78
Author(s):  
Cristian Tudoran ◽  
Stefan Albert ◽  
Dorin N. Dadarlat ◽  
Carmen Tripon ◽  
Sorin Dan Anghel

Improving the energy efficiency of our Institute's data center is an ambitious challenge for our research teams. Understanding how energy is consumed in each segment of the system is fundamental to minimizing the overall energy consumed by the system itself. In this paper, we propose an experimentally driven approach to developing a simple and accurate power consumption and temperature monitoring system. In this work, we focus our attention on monitoring and measuring the energy consumption patterns of our data center system at INCDTIM Cluj-Napoca, Romania.
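A monitoring system of the kind described typically post-processes timestamped power samples into energy figures. The following sketch does this with trapezoidal integration; the sample format is an assumption for illustration, not the system's actual data model.

```python
# Sketch: integrate (time, power) samples into consumed energy using
# the trapezoidal rule.

def energy_wh(samples):
    """samples: list of (time_seconds, power_watts) pairs sorted by time.
    Returns consumed energy in watt-hours."""
    joules = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        joules += 0.5 * (p0 + p1) * (t1 - t0)  # trapezoid area in joules
    return joules / 3600.0  # 1 Wh = 3600 J
```

Running the same integration per segment (servers, cooling, network) is what lets a team see where the energy actually goes before optimizing.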


2020 ◽  
Vol 11 (3) ◽  
pp. 42-65
Author(s):  
Nitin S. More ◽  
Rajesh B. Ingle

Advancements in virtual machine migration (VMM) have attracted attention due to its effective load-balancing features in cloud infrastructure. Data centers handle VMs organized in racks, and these racks are arranged in a spanning tree topology with high bandwidth. Thus, the cost of moving data between servers is highest when the racks are far from each other. This work addresses this issue and proposes a VMM strategy based on a self-adaptive D-Crow algorithm (S-DCrow), which incorporates adaptive constants into the Dragonfly-based Crow (D-Crow) optimization algorithm under the proposed topology model. The proposed S-DCrow describes a migration model based on topology, energy consumption, load, and migration cost. Here, the network is organized in a spanning tree topology, which S-DCrow exploits for optimal VMM. The proposed S-DCrow shows superior performance in terms of load, energy consumption, and migration cost, with values of 0.1417, 0.1009, and 0.1220, respectively.
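The claim that inter-rack cost grows with tree distance can be made concrete with a small sketch: in a complete binary tree the hop count between two nodes is found via their lowest common ancestor. The tree shape and cost model here are illustrative assumptions, not the S-DCrow formulation.

```python
# Sketch: hop count between two nodes of a complete binary tree where
# node i's parent is i // 2 (root is 1). Migration cost in a spanning
# tree topology grows with this distance.

def tree_distance(a, b):
    """Return the number of hops between nodes a and b."""
    da, db = a.bit_length(), b.bit_length()  # depth = bit length
    dist = 0
    while da > db:               # lift the deeper node
        a //= 2; da -= 1; dist += 1
    while db > da:
        b //= 2; db -= 1; dist += 1
    while a != b:                # climb both until the LCA
        a //= 2; b //= 2; dist += 2
    return dist
```

Two racks under the same edge switch (e.g. leaves 4 and 5) are 2 hops apart, while racks in different subtrees (4 and 7) are 4 hops apart, so a migration planner should prefer the former when loads allow.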


Computers ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 10
Author(s):  
Manal A. El Sayed ◽  
El Sayed M. Saad ◽  
Rasha F. Aly ◽  
Shahira M. Habashy

Multi-core processors have become widespread computing engines for recent embedded real-time systems. Efficient task partitioning plays a significant role in real-time computing for achieving higher performance while sustaining system correctness and predictability and meeting all hard deadlines. This paper deals with the problem of energy-aware static partitioning of periodic, dependent real-time tasks on a homogeneous multi-core platform. Concurrent access to shared resources by multiple tasks running on different cores induces higher blocking time, which increases the worst-case execution time (WCET) of tasks and can cause hard deadline misses, consequently resulting in system failure. The proposed blocking-aware-based partitioning (BABP) algorithm aims to reduce overall energy consumption while avoiding deadline violations. Compared to existing partitioning strategies, the proposed technique achieves greater energy savings. A series of experiments tests the capabilities of the suggested algorithm against popular heuristic partitioning algorithms: a comparison was made between the most commonly used bin-packing algorithms and the proposed algorithm in terms of energy consumption and system schedulability. Experimental results demonstrate that the designed algorithm outperforms the Worst Fit Decreasing (WFD), Best Fit Decreasing (BFD), and Similarity-Based Partitioning (SBP) bin-packing algorithms, reduces the energy consumption of the overall system, and improves schedulability.
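For readers unfamiliar with the baselines, Worst Fit Decreasing is the simplest to state: sort tasks by utilization and assign each to the currently least-loaded core, which tends to balance load across cores. This sketch shows WFD only, one of the comparison baselines, not the proposed BABP algorithm.

```python
# Sketch: Worst Fit Decreasing (WFD) partitioning of tasks onto cores
# by utilization. Assumes total utilization fits on the given cores.

def wfd_partition(utilizations, num_cores):
    """utilizations: per-task utilization values.
    Returns a list of per-core lists of task indices."""
    loads = [0.0] * num_cores
    assignment = [[] for _ in range(num_cores)]
    # Decreasing order: place the heaviest tasks first.
    for idx in sorted(range(len(utilizations)),
                      key=lambda i: -utilizations[i]):
        core = min(range(num_cores), key=lambda c: loads[c])  # worst fit
        loads[core] += utilizations[idx]
        assignment[core].append(idx)
    return assignment
```

BFD differs only in picking the most-loaded core that still has room; a blocking-aware scheme additionally accounts for which tasks share resources, which plain bin-packing ignores.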


Author(s):  
Mahendra Kumar Gourisaria ◽  
S. S. Patra ◽  
P. M. Khilar

Cloud computing is an emerging field of computation. As data centers consume large amounts of power, system overheads increase and carbon dioxide emissions rise drastically. The main aim is to maximize resource utilization while minimizing power consumption. However, the greatest usage of resources does not necessarily mean the right use of energy: idle resources also consume a significant amount of energy, so the number of idle resources must be kept to a minimum. Current studies have shown that the power consumed by unused computing resources accounts for roughly 1 to 20% of the total. Accordingly, unused resources are assigned tasks so that their idle periods are utilized. This paper proposes energy saving through task consolidation, which saves energy by minimizing the number of idle resources in a cloud computing environment. Extensive experiments were conducted to quantify the performance of the proposed algorithm, which is also compared with the FCFSMaxUtil and Energy-aware Task Consolidation (ETC) algorithms. The results show that the suggested algorithm surpasses FCFSMaxUtil and ETC in terms of CPU utilization and energy consumption.
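The reason consolidation saves energy is visible in the widely used linear server power model, where an idle server still draws a large base power. The sketch below uses illustrative numbers (100 W idle, 200 W peak) to show that one fully loaded server beats two half-loaded ones; it is a motivating model, not the paper's algorithm.

```python
# Sketch: linear server power model. Power per active server is
# p_idle + (p_peak - p_idle) * utilization; idle servers are assumed off.

def cluster_power(loads, p_idle=100.0, p_peak=200.0):
    """loads: utilization in [0, 1] for each active server.
    Returns total power draw in watts."""
    return sum(p_idle + (p_peak - p_idle) * u for u in loads)
```

Two servers at 50% draw 300 W under this model, while the same work consolidated onto one server at 100% draws only 200 W; that gap is what task-consolidation schemes harvest.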


Author(s):  
Xiao Ma ◽  
Zhongbao Zhang ◽  
Sen Su

Recently, the concept of the virtual data center (VDC) has attracted significant attention from researchers. A VDC is made up of virtual nodes and virtual links with guaranteed bandwidth. It offers elasticity and flexibility, meaning a VDC can adjust resources dynamically according to different requirements. Existing studies focus on designing optimal embedding algorithms to achieve a high success rate for virtual data center requests. However, because the resources of the physical data center change over time, the optimal solution may become sub-optimal. In this paper, we study the problem of virtual data center migration and propose an energy-aware virtual data center migration algorithm, called CA-VDCM-ACO. This novel algorithm leverages the migration technique to further reduce energy consumption while guaranteeing the success rate for the physical data center. Extensive experiments show that our algorithm is very effective at reducing energy consumption.
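The "ACO" in the algorithm's name refers to ant colony optimization, whose core step is choosing a candidate with probability proportional to pheromone^alpha times a heuristic desirability^beta. The sketch below reduces that step to selecting a migration target host, with a heuristic that favors low-power hosts; all names, parameters, and the heuristic itself are illustrative assumptions, not the CA-VDCM-ACO design.

```python
# Sketch: one ACO selection step. A host is chosen with probability
# proportional to pheromone^alpha * (1/power)^beta, so well-trodden,
# low-power hosts are favored. rng is injectable for determinism.

import random

def choose_host(pheromone, power, alpha=1.0, beta=2.0, rng=random.random):
    """pheromone, power: per-host lists (power > 0; lower is better).
    Returns the index of the selected host."""
    weights = [(tau ** alpha) * ((1.0 / p) ** beta)
               for tau, p in zip(pheromone, power)]
    r = rng() * sum(weights)        # roulette-wheel selection
    acc = 0.0
    for host, w in enumerate(weights):
        acc += w
        if r <= acc:
            return host
    return len(weights) - 1
```

In a full ACO loop, ants build complete migration plans this way and then deposit pheromone on the hosts used by the lowest-energy plans, biasing later iterations toward them.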

