Effects of Servers' Rack Location and Power Loading Configurations on the Thermal Management of Data Center Racks' Array

Author(s):  
S. A. Nada ◽  
K. E. Elfeky

Effects of server/rack locations and server loading configurations on the thermal performance of a data center racks' array are experimentally investigated using a scaled physical model simulating a real data center. Front and rear rack temperature profiles, server temperatures, and the supply/return heat index performance metrics (SHI/RHI) are used to evaluate the thermal management of the racks' array. The results showed that (i) servers located at the highest levels of a rack cabinet have the worst thermal performance, (ii) the middle racks of the row showed the best thermal performance and energy efficiency, and (iii) placing the servers with the highest power densities in the middle of the row improves the thermal performance and energy efficiency of the racks' array.
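The SHI/RHI metrics referenced here are the standard recirculation indices for raised-floor data centers: SHI measures how much cold-aisle air has already been heated by recirculated exhaust before it reaches the racks, and SHI + RHI = 1. A minimal sketch of the usual computation from rack inlet/outlet and CRAC supply temperatures (all temperature readings below are hypothetical, not the paper's data):

```python
def supply_heat_index(t_inlets, t_outlets, t_supply):
    """SHI: fraction of the total enthalpy rise that occurs *before*
    the racks, i.e. heating of cold-aisle air by recirculated exhaust
    (0 = no recirculation, higher = worse)."""
    delta_q = sum(t - t_supply for t in t_inlets)   # rise before rack inlets
    q_total = sum(t - t_supply for t in t_outlets)  # rise at rack exhausts
    return delta_q / q_total

# Hypothetical readings (deg C) for a three-rack row
t_in = [16.0, 15.5, 18.0]    # rack inlet temperatures
t_out = [28.0, 27.5, 30.0]   # rack outlet temperatures
shi = supply_heat_index(t_in, t_out, t_supply=15.0)
rhi = 1.0 - shi              # SHI + RHI = 1 by definition
```

Under this convention, a rack whose inlet temperature sits well above the CRAC supply temperature drags the row's SHI up, which is consistent with the paper's finding that high-level servers fare worst.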

Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

As heat dissipation in data centers rises by orders of magnitude, inefficiencies such as recirculation will have an increasingly significant impact on the thermal manageability and energy efficiency of the cooling infrastructure. For example, prior work has shown that for simple data centers with a single Computer Room Air-Conditioning (CRAC) unit, an operating strategy that fails to account for inefficiencies in the air space can result in suboptimal performance. To enable system-wide optimality, an exergy-based approach to CRAC control has previously been proposed. However, application of such a strategy in a real data center environment is limited by the assumptions inherent to the single-CRAC derivation. This paper addresses these assumptions by modifying the exergy-based approach to account for the additional interactions encountered in a multi-component environment. It is shown that the modified formulation provides the framework necessary to evaluate performance of multi-component data center thermal management systems under widely different operating circumstances.


Author(s):  
Chandrakant Patel ◽  
Ratnesh Sharma ◽  
Cullen Bash ◽  
Sven Graupner

Computing will be pervasive, and the enablers of pervasive computing will be data centers housing computing, networking, and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single-board computing systems deployed in racks. A data center with 1,000 racks, occupying over 30,000 square feet, would require 10 MW to power the computing infrastructure; at this dissipation, an additional 5 MW would be needed by the cooling resources to remove the heat. At $100/MWh, cooling alone would cost about $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking, and storage hardware. The increasing compaction of such devices in data centers has created thermal and energy management issues that inhibit the sustainability of such a global infrastructure. In this paper, we propose the framework of an Energy-Aware Grid that provides a global utility infrastructure explicitly incorporating energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, workload placement decisions are made across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by each data center's resource allocation manager, is a complex function of the data center's thermal management infrastructure and of seasonal and diurnal variations. A detailed procedure for implementation of a test case is provided, with an estimate of the energy savings to justify the economics. An example workload deployment shown in the paper seeks out the most energy-efficient data center in the global network.
The locality-based energy efficiency of a data center is shown to arise, for example, from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection: computing in, and rejecting heat from, a data center at a nighttime ambient of 20°C in New Delhi, India, while Phoenix, USA sits at 45°C. The higher efficiency of the cooling system in the New Delhi data center derives from the lower lift from evaporator to condenser. Besides this obvious advantage due to the external ambient, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluids behavior of a data center into workload placement decisions.
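The cooling-cost figure quoted above follows from straightforward arithmetic on the numbers given in the abstract (5 MW of cooling load, $100/MWh, year-round operation); a quick check:

```python
cooling_power_mw = 5           # cooling overhead for the 10 MW IT load
price_per_mwh = 100            # electricity price, $/MWh
hours_per_year = 24 * 365      # continuous operation

annual_cost = cooling_power_mw * price_per_mwh * hours_per_year
print(f"${annual_cost:,}")     # $4,380,000 -- roughly $4 million per annum
```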


Author(s):  
Chun Wang ◽  
Xiaoguang Sun ◽  
Jun Zhang ◽  
Nishi Ahuja

Data center design concerns not just energy usage but also other important overall considerations such as the actual computational work, energy efficiency, and server performance. The server, as one of the key ingredients of a data center, plays an increasingly crucial role in overall energy use, especially in cases where the efficiency of the infrastructure has already been optimized. Baidu has been exploring the sweet spot between power and performance in efficient rack server design and deployment, optimizing the energy efficiency of its self-built data centers from every angle. The recent deployment of rack servers with a distributed Li-ion backup battery subsystem (BBS) is one example that demonstrates this advanced rack server design for energy efficiency. Compared with a traditional lead-acid-battery UPS in the data center, the distributed BBS design in the Baidu rack server offers lower power loss, a simplified power delivery topology, data center real-estate savings, and scalable deployment on demand without overprovisioning, all of which contribute to a total cost of ownership (TCO) reduction in both cap-ex (i.e., power infrastructure investment) and op-ex (i.e., the electricity bill). This paper introduces the overall architecture and design of the Baidu rack server with distributed BBS. It further details an energy-efficiency design methodology of trimming peak power draw based on workload power characterization. The related lab data collection, experimental results, ongoing work, and future plans are summarized at the end, and the TCO savings arising from bringing the distributed BBS design into the rack server system are recapped.
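The peak-draw trimming described here is, in essence, battery peak shaving: the BBS discharges whenever rack power would exceed a provisioned cap, so the upstream feed can be sized to the cap rather than the peak, and the battery recharges when load drops below the cap. A minimal sketch of that policy; the power trace, cap, and battery capacity are hypothetical illustration values, not Baidu's figures:

```python
def trim_peaks(power_trace_w, cap_w, battery_wh, dt_h=1.0):
    """Clamp upstream draw at cap_w: cover excursions above the cap
    from the battery, and recharge it (within the cap) when load
    falls below the cap. Returns the resulting upstream draw trace."""
    soc = battery_wh                        # state of charge; start full
    upstream = []
    for load in power_trace_w:
        if load > cap_w:                    # discharge to shave the peak
            need_wh = (load - cap_w) * dt_h
            drawn_wh = min(need_wh, soc)
            soc -= drawn_wh
            upstream.append(load - drawn_wh / dt_h)
        else:                               # headroom: recharge battery
            charge_wh = min((cap_w - load) * dt_h, battery_wh - soc)
            soc += charge_wh
            upstream.append(load + charge_wh / dt_h)
    return upstream

# Hypothetical hourly rack power (W) with a 10 kW provisioned cap
trace = [8000, 12000, 15000, 9000, 7000]
upstream = trim_peaks(trace, cap_w=10000, battery_wh=8000)
```

With a sufficiently large battery the upstream draw never exceeds the cap, which is exactly what lets the power infrastructure be provisioned below the workload's raw peak.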


2008 ◽  
Vol 130 (2) ◽  
Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

The modeling of recirculation patterns in air-cooled data centers is of interest to ensure adequate thermal management of computer racks at increased heat densities. Most metrics that describe recirculation are based exclusively on temperature inside the data center, and therefore fail to provide adequate information regarding the energy efficiency of the thermal infrastructure. This paper addresses this limitation through an exergy analysis of the data center thermal management system. The approach recognizes that the mixing of hot and cold streams in the data center airspace is an irreversible process and must therefore lead to a loss of exergy. Experimental validation in a test data center confirms that such an exergy-based characterization in the cold aisle reflects the same recirculation trends as suggested by traditional temperature-based metrics. Further, by extending the exergy-based model to include irreversibilities from other components of the thermal architecture, it becomes possible to quantify the amount of available energy supplied to the cooling system that is being utilized for thermal management purposes. The energy efficiency of the entire data center cooling system can then be collapsed into the single metric of net exergy loss. When evaluated against a ground state of the external ambience, this metric enables an estimate of how much of the energy emitted into the environment could potentially be harnessed in the form of useful work. Thus, this paper successfully demonstrates that the proposed exergy-based approach can provide a foundation upon which the data center cooling system can be simultaneously evaluated for thermal manageability and energy efficiency.
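The irreversibility that this analysis builds on can be quantified with a standard second-law result: when two ideal-gas air streams mix at constant pressure, the exergy destroyed is T0 times the entropy generated by the mixing. A sketch of that textbook expression with hypothetical stream conditions (the paper's actual measurements are not reproduced here):

```python
import math

CP_AIR = 1005.0  # specific heat of air at constant pressure, J/(kg*K)

def mixing_exergy_loss(m_hot, t_hot, m_cold, t_cold, t0):
    """Exergy destroyed (W) when a hot and a cold air stream mix at
    constant pressure. Temperatures in kelvin, mass flows in kg/s,
    t0 = dead-state (ambient) temperature."""
    t_mix = (m_hot * t_hot + m_cold * t_cold) / (m_hot + m_cold)
    s_gen = CP_AIR * (m_hot * math.log(t_mix / t_hot)
                      + m_cold * math.log(t_mix / t_cold))  # W/K, always > 0
    return t0 * s_gen

# Hypothetical: 0.5 kg/s of 40 C rack exhaust recirculating into
# 2.0 kg/s of 15 C supply air, with the dead state at 25 C ambient
loss_w = mixing_exergy_loss(0.5, 313.15, 2.0, 288.15, 298.15)
```

Because the entropy generation term is strictly positive for unequal stream temperatures, any recirculation of hot exhaust into the cold aisle registers as a nonzero exergy loss, which is what allows the single net-exergy-loss metric to capture recirculation and cooling inefficiency together.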


Author(s):  
Cullen Bash ◽  
George Forman

Data center costs for computer power and cooling have been steadily increasing over the past decade. Much work has been done in recent years on understanding how to improve the delivery of cooling resources to IT equipment in data centers, but little attention has been paid to the optimization of heat production by considering the placement of application workload. Because certain physical locations inside the data center are more efficient to cool than others, this suggests that allocating heavy computational workloads onto those servers that are in more efficient places might bring substantial savings. This paper explores this issue by introducing a workload placement metric that considers the cooling efficiency of the environment. Additionally, results from a set of experiments that utilize this metric in a thermally isolated portion of a real data center are described. The results show that the potential savings is substantial and that further work in this area is needed to exploit the savings opportunity.
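A placement policy of the kind explored here can be sketched as a greedy assignment: rank server locations by a cooling-efficiency score and put the heaviest workloads in the most efficient spots. The scores and workloads below are hypothetical, and this simple ranking is a stand-in illustration, not the paper's actual metric:

```python
def place_workloads(workloads, efficiency):
    """Greedy assignment: heaviest workload goes to the most
    cooling-efficient location. `efficiency` maps location -> score
    (higher = cheaper to cool); `workloads` maps job -> load."""
    spots = sorted(efficiency, key=efficiency.get, reverse=True)
    jobs = sorted(workloads, key=workloads.get, reverse=True)
    return dict(zip(jobs, spots))

# Hypothetical cooling-efficiency scores and workload sizes
eff = {"rackA": 0.90, "rackB": 0.60, "rackC": 0.75}
jobs = {"batch": 80, "web": 40, "db": 60}
placement = place_workloads(jobs, eff)
# batch -> rackA, db -> rackC, web -> rackB
```

The point of the paper's experiments is precisely that the efficiency scores vary with physical location inside the room, so a placement like this concentrates heat where the cooling infrastructure removes it most cheaply.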


Author(s):  
Venugopal Gandikota ◽  
Harish Chengalvala ◽  
Amy S. Fleischer ◽  
G. F. Jones

The ongoing trend toward increasing device performance while shrinking device size often results in escalating power densities and high operating temperatures. High operating temperatures may lead to reduced reliability and induced thermal stresses. It is therefore necessary to employ new and innovative thermal management techniques to maintain a suitable junction temperature at high power densities, and for this reason there is interest in a variety of liquid cooling techniques. This study analyzes a composite-material heat sink consisting of a very large number of small-cross-section fins fabricated from carbon pitch fibers and epoxy. These carbon pitch fibers have a high thermal conductivity along the length of the fin; the longer fin length is expected to provide more heat transfer surface area and a more effective heat sink. This experimental study characterizes the thermal performance of the carbon-fiber heat sink in a two-phase closed-loop thermosyphon using FC-72 as the working fluid. The influence of heat load, thermosyphon fill volume, and condenser operating temperature on the overall thermal performance is examined. The results provide significant insight into the possible implementation and benefits of carbon-fiber heat sink technology in two-phase flow, leading to improved thermal management strategies for advanced electronics. The carbon-fiber heat sink yielded heat transfer coefficients in the range of 1300–1500 W/m²·K for heat fluxes up to 3.2 W/cm². Thermal resistances of 0.20–0.23 K/W were achieved for the same heat fluxes. Condenser temperature and fill ratio did not show a significant effect on any of the results.
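The reported figures relate through the usual definitions h = q″/ΔT (heat transfer coefficient from heat flux and wall superheat) and R = ΔT/Q (thermal resistance from temperature difference and total heat load). A quick sketch of those conversions; the 22 K superheat below is a hypothetical value chosen for illustration, since the abstract does not report temperatures directly:

```python
def heat_transfer_coefficient(q_flux_w_cm2, delta_t_k):
    """h = q''/dT, converting the flux from W/cm^2 to W/m^2
    (1 W/cm^2 = 1e4 W/m^2). Result in W/(m^2*K)."""
    return (q_flux_w_cm2 * 1e4) / delta_t_k

def thermal_resistance(delta_t_k, q_total_w):
    """R = dT/Q in K/W."""
    return delta_t_k / q_total_w

# Hypothetical: the peak 3.2 W/cm^2 flux with an assumed 22 K superheat
h = heat_transfer_coefficient(3.2, 22.0)  # falls in the reported range
```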


Volume 1 ◽  
2004 ◽  
Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

The recent miniaturization of electronic devices and compaction of computer systems will soon lead to data centers with power densities of the order of 300 W/ft². At these levels, traditional thermal management techniques are unlikely to suffice. To enable the dynamic smart cooling systems necessary for future data centers, an exergetic approach based on the second law of thermodynamics has recently been proposed. However, no experimental data related to this concept is currently available. This paper discusses the development and subsequent validation of an exergy-based computer model at an instrumented data center in Palo Alto, California. The study finds that when appropriately calibrated, such a computational tool can successfully predict information about local and global thermal performance that cannot be perceived intuitively from traditional design methods. Further development of the concept has promising potential for efficient data center thermal management.


2021 ◽  
Vol 1197 (1) ◽  
pp. 012059
Author(s):  
Sreenivas Muthukumaraswamy Kamalakannan ◽  
Sravan Ashwin Ananda Murali ◽  
Mohamed Ansari Raja Abdul Malik ◽  
Nithya Anand Saravanan ◽  
B Vimal Kumar ◽  
...  

Retrofitting of a given structure is done to improve its strength and performance. In most scenarios, retrofitting is done to improve seismic resilience and performance; in this investigation, however, retrofitting techniques are used to change the functional use of a structure: specifically, to transform a commercial 10-storey building into a data center. To accommodate the additional load parameters, retrofitting solutions are adopted instead of complete reconstruction, which reduces cost and labor and is far more energy- and environment-conscious. The structure was modelled using the ETABS software with appropriate load parameters. The altered load parameters for the data center are likewise examined to understand the extent of failure, which helps indicate the appropriate retrofitting solution to ameliorate the situation. This project is cutting-edge in that it pursues a sustainable approach within the construction industry, with a view to the need for energy efficiency.

