Effect of Data Center Layout on Rack Inlet Air Temperatures

Author(s):  
Roger Schmidt ◽  
Madhusudan Iyengar

The heat dissipated by large servers and switching equipment is reaching levels that make it very difficult to cool these systems in data centers or telecommunications rooms. Some of the highest powered systems dissipate upwards of 4000 watts/ft2 (43,000 watts/m2) based on the equipment footprint. When systems dissipating this much heat are clustered together within a data center, significant cooling challenges can result. This paper describes the thermal profile of three data center layouts (two are of the same data center at different points in time, with different layouts). Detailed measurements were taken for all three: electronic equipment power usage; perforated floor tile airflow; cable cutout airflow; computer room air conditioning (CRAC) airflow, temperatures, and power usage; and electronic equipment inlet air temperatures. Although detailed measurements were recorded, this paper focuses on the macro-level results for the data center to see whether patterns emerge that might be helpful as future guidelines for data center layouts optimized for cooling. Specifically, areas of the data center where racks have similar inlet air temperatures are examined relative to the rack and CRAC unit layout.
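As a quick check of the power densities quoted above, the sketch below converts an equipment heat load into a footprint heat flux in both unit systems. The rack power and footprint values are illustrative assumptions, not measurements from the paper.

```python
# Footprint heat flux: equipment power divided by its footprint area.
# Values below are illustrative only, not measurements from the paper.

FT2_PER_M2 = 10.7639  # square feet per square metre

def footprint_heat_flux(power_w: float, footprint_ft2: float) -> tuple[float, float]:
    """Return heat flux as (W/ft^2, W/m^2) for a given equipment footprint."""
    w_per_ft2 = power_w / footprint_ft2
    return w_per_ft2, w_per_ft2 * FT2_PER_M2

# Example: a hypothetical frame dissipating 40 kW over a 10 ft^2 footprint
# gives 4,000 W/ft^2, i.e. roughly 43,000 W/m^2 as cited above.
print(footprint_heat_flux(40_000.0, 10.0))
```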

Author(s):  
Roger Schmidt ◽  
Madhusudan Iyengar ◽  
Joe Caricari

With the ever increasing heat dissipated by IT equipment housed in data centers, it is becoming more important to project the changes that can occur in the data center as newer, higher powered hardware is installed. The computational fluid dynamics (CFD) software that is available has improved over the years, and some CFD software specific to data center thermal analysis has been developed. This has improved the timeliness of quick analyses of the effects of installing new hardware in the data center. It is critically important, however, that this software give the user an accurate report of the effects of adding the new hardware. The purpose of this paper is to examine a large cluster installation and compare the CFD analysis with environmental measurements obtained from the same site. This paper shows measurements and CFD analysis of racks with power as high as 27 kW, clustered such that heat fluxes in some regions of the data center exceeded 700 Watts/ft2 (7535 W/m2). This paper describes the thermal profile of a high performance computing cluster located in an IBM data center and a comparison of that cluster modeled with CFD software. The high performance Advanced Simulation and Computing (ASC) cluster, developed and manufactured by IBM, is code named ASC Purple. It is the world's 3rd fastest supercomputer [1], operating at a peak performance of 77.8 TFlop/s. ASC Purple, which employs IBM pSeries p575, Model 9118, contains more than 12,000 processors, 50 terabytes of memory, and 2 petabytes of globally accessible disk space. The cluster was first tested in the IBM development lab in Poughkeepsie, NY and then shipped to Lawrence Livermore National Laboratory in Livermore, California, where it was installed to support its national security mission. Detailed measurements were taken in both data centers of electronic equipment power usage, perforated floor tile airflow, cable cutout airflow, computer room air conditioning (CRAC) airflow, and electronic equipment inlet air temperatures, and were reported in Schmidt [2]; only the IBM Poughkeepsie results are reported here, along with a comparison to CFD modeling results. In some areas of the Poughkeepsie data center, equipment inlet air temperature specifications were exceeded by a significant amount. These areas are highlighted, and reasons are given for why they failed to meet the criteria. The modeling results by region showed trends that compared somewhat favorably with measurements, but some rack thermal profiles deviated quite significantly.


Author(s):  
Vikneshan Sundaralingam ◽  
Yogendra Joshi ◽  
Vaibhav Arghode

Conventionally, raised floor data centers operate using controllers that only maintain constant data center space conditions (i.e., supply air temperatures or return air temperatures) at the Computer Room Air Conditioning (CRAC) unit level, with the intention of providing enough cooling for the servers. The objective of this paper is to explore the framework required to design a controller that regulates server CPU temperatures by specifying the supply air temperature of the CRAC. The controller will be an addition to the existing controller used by the CRAC to regulate supply air temperatures. The implementation and performance of the modified integral-action controller are analyzed in the discrete-time domain, and other important parameters are compared. As a preliminary attempt, the controller will be designed for a standard 44U cabinet with 42 "1U" servers, where the machines will execute a prescribed compute load variation: (1) a step increase in all compute loads, and (2) a scaled-down representation of a rack of servers using utilization trends from one of Google's data centers. Ultimately, the development of this controller is motivated by the growing interest in Data Center Infrastructure Management (DCIM), where IT-level and facility-level information are both used to intelligently plan and manage the resources of a data center.
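The abstract does not give the control law; the sketch below is only a minimal illustration of a discrete-time integral-action controller of the kind described, in which the CRAC supply air temperature setpoint is adjusted from the error between a measured CPU temperature and its target. The gain, setpoint limits, and example readings are assumed values.

```python
# Minimal sketch of a discrete-time integral-action controller that adjusts the
# CRAC supply air temperature setpoint from CPU temperature error.
# Gains, limits, and example readings are illustrative assumptions.

class IntegralCracController:
    def __init__(self, ki: float, t_supply_init: float,
                 t_supply_min: float = 12.0, t_supply_max: float = 27.0):
        self.ki = ki                   # integral gain: °C of setpoint change per °C·step of error
        self.t_supply = t_supply_init  # current supply air temperature setpoint, °C
        self.t_min = t_supply_min
        self.t_max = t_supply_max

    def step(self, t_cpu_measured: float, t_cpu_target: float) -> float:
        """One control update: integrate the error and clamp the setpoint."""
        error = t_cpu_measured - t_cpu_target   # positive error -> CPUs too hot
        self.t_supply -= self.ki * error        # hotter CPUs -> colder supply air
        self.t_supply = max(self.t_min, min(self.t_max, self.t_supply))
        return self.t_supply

# Example: CPUs running 4 °C above target gradually pull the supply setpoint down.
ctrl = IntegralCracController(ki=0.1, t_supply_init=22.0)
for _ in range(5):
    print(ctrl.step(t_cpu_measured=74.0, t_cpu_target=70.0))
```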


Author(s):  
Chris Muller ◽  
Chuck Arent ◽  
Henry Yu

Lead-free manufacturing regulations, reduction in circuit board feature sizes, and the miniaturization of components to improve hardware performance have combined to make data center IT equipment more prone to attack by corrosive contaminants. Manufacturers are under pressure to control contamination in the data center environment, and maintaining acceptable limits is now critical to the continued reliable operation of datacom and IT equipment. This paper will discuss ongoing reliability issues with electronic equipment in data centers and will present updates on contamination concerns, standards activities, and case studies from several different locations illustrating the successful application of contamination assessment, control, and monitoring programs to eliminate electronic equipment failures.


Author(s):  
Siddharth Bhopte ◽  
Dereje Agonafer ◽  
Roger Schmidt ◽  
Bahgat Sammakia

In a typical raised floor data center with alternating hot and cold aisles, air enters the front of each rack over the entire height of the rack. Since the heat loads of data processing equipment continue to increase at a rapid rate, it is a challenge to maintain temperatures within the stated requirements for all the racks within the data center. A facility manager has discretion in deciding the data center room layout, but a wrong decision will eventually lead to equipment failure. There are many complex decisions to be made early in the design as the data center evolves, such as optimizing the raised floor plenum and floor tile placement and minimizing local hot spots. These adjustments in configuration affect rack inlet air temperatures, which are one of the keys to effective thermal management. In this paper, a raised floor data center with 4.5 kW racks is considered. There are four rows of racks with an alternating hot and cold aisle arrangement, and each row has six racks installed. Two CRAC units supply chilled air to the data center through the pressurized plenum. The effects of plenum depth, floor tile placement, and ceiling height on the rack inlet air temperature are discussed, and plots are presented over the defined range. A multi-variable approach to optimizing the data center room layout so as to minimize the rack inlet air temperature is then proposed, and significant improvement over the initial model is shown. The results of the multi-variable design optimization are used to present guidelines for optimal data center performance.
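The abstract does not give the optimization formulation; as a hedged illustration of a multi-variable layout optimization of the kind described, the sketch below minimizes a hypothetical surrogate for the maximum rack inlet temperature over plenum depth and ceiling height. The surrogate function, bounds, and coefficients are invented for illustration; in practice each evaluation would come from a CFD run.

```python
# Illustrative multi-variable layout optimization: minimize a surrogate for the
# maximum rack inlet air temperature over plenum depth and ceiling height.
# The surrogate and its coefficients are invented; real evaluations would be CFD runs.
from scipy.optimize import minimize

def max_inlet_temp_surrogate(x):
    plenum_depth_m, ceiling_height_m = x
    # Hypothetical response: deeper plenums and taller ceilings help, up to a point.
    return (30.0
            - 4.0 * plenum_depth_m + 2.5 * plenum_depth_m**2
            - 1.5 * ceiling_height_m + 0.25 * ceiling_height_m**2)

bounds = [(0.3, 1.2),   # plenum depth, m
          (2.7, 4.5)]   # ceiling height, m

result = minimize(max_inlet_temp_surrogate, x0=[0.6, 3.0], bounds=bounds)
print(result.x, result.fun)  # layout variables that minimize the surrogate temperature
```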


Author(s):  
Milton Meckler

A growing concern for many users of Data Centers is their continuing availability following the explosive growth of internet services in recent years. The recent push to maximize Data Center IT virtualization investments has improved the consolidation of previously under-utilized server and cabling resources, resulting in higher overall facility utilization and IT capacity. It has also resulted in excessive levels of equipment heat release, e.g. from high-energy (blade-type) servers and telecommunication equipment, that challenge central and distributed air conditioning systems delivering air via raised floor or overhead to rack mounted servers arranged in alternating cold and hot aisles (in some cases reaching 30 kW/rack or 300 W/ft2), with return air via end-of-aisle or separate-room CRAC units, which are often found to fight each other, contributing to excessive energy use. Under those circumstances, hybrid indirect liquid cooling facilities are often required to augment the above-referenced air conditioning systems in order to prevent overheating and degradation of mission-critical IT equipment, so that rack mounted server equipment continues to operate within ASHRAE TC 9.9 prescribed psychrometric limits and IT manufacturers' specifications, beyond which operational reliability cannot be assured. Recent interest in new web-based software and secure cloud computing is expected to further accelerate the growth of Data Centers, which, according to a recent study, consumed approximately 61 billion kWh of electricity across the estimated number of U.S. Data Centers in 2006. Computer servers and supporting power infrastructure for the Internet are estimated to represent 1.5% of all electricity generated and, along with aggregated IT and communications equipment, including PCs in current use, have been estimated to emit 2% of global carbon emissions. The projected eco-footprint of Data Centers has therefore become a matter of growing concern. Accordingly, our paper focuses on how best to improve the energy utilization of the fossil fuels used to power Data Centers and the energy efficiency of related auxiliary cooling and power infrastructures, so as to reduce their eco-footprint and GHG emissions to sustainable levels as soon as possible. To this end, we plan to demonstrate significant comparative savings in annual energy use and reductions in associated annual GHG emissions by employing an on-site cogeneration system (in lieu of current reliance on remote electric power generation systems) and by introducing energy-efficient outside air (OSA) desiccant-assisted pre-conditioners to maintain Class 1, Class 2, or NEBS indoor air dew points, as needed, operated with modified existing (sensible-only cooling, distributed air conditioning, and chiller) systems, thereby eliminating the need for CRAC integral unit humidity controls while achieving an estimated 60 to 80% (virtualized) reduction in the number of servers within an existing (hypothetical post-consolidation) 3.5 MW demand Data Center located in the southeastern (and/or southern) U.S., coastal Puerto Rico, or Brazil, characterized by three (3) representative microclimates ranging from moderate to high seasonal outside air (OSA) coincident design humidity and temperature.


Climate ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. 110
Author(s):  
Alexandre F. Santos ◽  
Pedro D. Gaspar ◽  
Heraldo J. L. de Souza

Data Centers (DC) are specific buildings that require large infrastructures to store all the information needed by companies. All data transmitted over the network is stored in DCs. By the end of 2020, Data Centers will have grown 53% worldwide. There are methodologies that measure the efficiency of energy consumption; the most used metric is the Power Usage Effectiveness (PUE) index, but it does not fully reflect efficiency. Three DCs located in the cities of Curitiba, Londrina and Iguaçu Falls (Brazil), with close PUE values, are evaluated in this article using the Energy Usage Effectiveness Design (EUED) index as an alternative to the current method. EUED uses energy as the comparative element in the design phase. Infrastructure consumption is the sum of the energy used by Heating, Ventilating and Air Conditioning (HVAC) equipment, IT equipment, lighting and other loads. The EUED values obtained were 1.245 (kWh/yr)/(kWh/yr), 1.313 (kWh/yr)/(kWh/yr) and 1.316 (kWh/yr)/(kWh/yr) for Curitiba, Londrina and Iguaçu Falls, respectively. The difference between the EUED and the PUE at Constant External Air Temperature (COA) is 16.87% for Curitiba, 13.33% for Londrina and 13.30% for Iguaçu Falls. The new Perfect Design Data center (PDD) index, which ranks efficiency in increasing order, is easy to interpret. It is a redefinition of EUED, given by a linear equation, which provides an approximate result and uses a classification table. It is a decision support index for the location of a Data Center in the project phase.
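The abstract does not reproduce the EUED equation; the sketch below only illustrates the standard PUE-style ratio of total facility energy to IT energy that both indices build on. The annual energy figures are invented for illustration, and the design-phase weighting used by EUED in the paper is not modeled here.

```python
# PUE-style ratio: total annual facility energy over annual IT equipment energy.
# Figures below are invented for illustration; EUED's design-phase details are
# not reproduced here.

def pue(it_energy_kwh_yr: float, hvac_kwh_yr: float,
        lighting_kwh_yr: float, other_kwh_yr: float) -> float:
    total = it_energy_kwh_yr + hvac_kwh_yr + lighting_kwh_yr + other_kwh_yr
    return total / it_energy_kwh_yr   # (kWh/yr)/(kWh/yr), dimensionless

# Hypothetical data center: 1,000 MWh/yr of IT load plus infrastructure loads.
print(pue(it_energy_kwh_yr=1_000_000, hvac_kwh_yr=280_000,
          lighting_kwh_yr=15_000, other_kwh_yr=25_000))  # ≈ 1.32
```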


Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

As heat dissipation in data centers rises by orders of magnitude, inefficiencies such as recirculation will have an increasingly significant impact on the thermal manageability and energy efficiency of the cooling infrastructure. For example, prior work has shown that for simple data centers with a single Computer Room Air-Conditioning (CRAC) unit, an operating strategy that fails to account for inefficiencies in the air space can result in suboptimal performance. To enable system-wide optimality, an exergy-based approach to CRAC control has previously been proposed. However, application of such a strategy in a real data center environment is limited by the assumptions inherent to the single-CRAC derivation. This paper addresses these assumptions by modifying the exergy-based approach to account for the additional interactions encountered in a multi-component environment. It is shown that the modified formulation provides the framework necessary to evaluate performance of multi-component data center thermal management systems under widely different operating circumstances.
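The paper's exergy formulation is not reproduced in the abstract; the sketch below shows only the textbook specific flow-exergy expression for air treated as an ideal gas, which is the kind of quantity an exergy-based evaluation of CRAC performance would track. The state values are illustrative, and the pressure term is omitted by assuming ambient pressure.

```python
# Textbook specific flow exergy of air as an ideal gas, relative to a dead state.
# The pressure term is omitted by assuming p = p0; state values are illustrative.
import math

CP_AIR = 1005.0  # J/(kg·K), specific heat of air at constant pressure

def flow_exergy(t_kelvin: float, t0_kelvin: float) -> float:
    """Specific flow exergy in J/kg relative to dead-state temperature t0."""
    return CP_AIR * (t_kelvin - t0_kelvin) - t0_kelvin * CP_AIR * math.log(t_kelvin / t0_kelvin)

# Example: chilled supply air at 15 °C relative to a 25 °C room dead state carries
# usable "cooling" exergy; recirculation that warms it toward 25 °C destroys it.
print(flow_exergy(288.15, 298.15))  # J/kg of supply air
```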


2014 ◽  
Vol 602-605 ◽  
pp. 928-932
Author(s):  
Min Li ◽  
Yun Wang ◽  
Zheng Qian Feng ◽  
Wang Li

By studying the energy-saving technologies of air-conditioning systems in data centers, we designed an intelligent air conditioning system, improved its cooling efficiency through a reasonable arrangement of hot and cold aisles, reduced the running time of the HVAC equipment by using an intelligent heat exchange system, and provided a reference for energy-saving research on air conditioning systems in data centers.


Author(s):  
Tianyi Gao ◽  
James Geer ◽  
Russell Tipton ◽  
Bruce Murray ◽  
Bahgat G. Sammakia ◽  
...  

The heat dissipated by high performance IT equipment such as servers and switches in data centers is increasing rapidly, which makes thermal management even more challenging. IT equipment is typically designed to operate at a rack inlet air temperature between 10 °C and 35 °C. The newest published environmental standards for operating IT equipment proposed by ASHRAE specify a long-term recommended dry bulb IT air inlet temperature range of 18 °C to 27 °C. For the short-term specification, the widest allowable inlet temperature range is 5 °C to 45 °C. Failure to maintain these specifications leads to significantly detrimental impacts on the performance and reliability of these electronic devices. Thus, understanding the cooling system is of paramount importance for the design and operation of data centers. In this paper, a hybrid cooling system is numerically modeled and investigated. The numerical modeling is conducted using a commercial computational fluid dynamics (CFD) code. The hybrid cooling strategy mounts in-row cooling units between the server racks to assist the raised floor air cooling. The effects of several input variables, including rack heat load and heat density, rack air flow rate, in-row cooling unit cooling fluid flow rate and temperature, in-row coil effectiveness, centralized cooling unit supply air flow rate, non-uniformity in rack heat load, and raised floor height, are studied parametrically. Their detailed effects on the rack inlet air temperatures and the in-row cooler performance are presented. The modeling results and corresponding analyses are used to develop general installation and operation guidance for the in-row cooler strategy of a data center.
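As a small illustration of the inlet-temperature envelopes quoted above, the sketch below classifies rack inlet temperatures against the recommended (18-27 °C) and allowable (5-45 °C) ranges. The sample readings are invented and are not measurements from the study.

```python
# Classify rack inlet air temperatures against the ASHRAE ranges quoted above:
# recommended 18-27 °C, allowable 5-45 °C. Sample readings are invented.

RECOMMENDED = (18.0, 27.0)
ALLOWABLE = (5.0, 45.0)

def classify_inlet(temp_c: float) -> str:
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "within recommended range"
    if ALLOWABLE[0] <= temp_c <= ALLOWABLE[1]:
        return "allowable excursion"
    return "out of specification"

for reading in [21.5, 29.0, 47.2]:   # hypothetical rack inlet readings, °C
    print(f"{reading:5.1f} °C -> {classify_inlet(reading)}")
```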


Author(s):  
Prabjit Singh ◽  
Levente Klein ◽  
Dereje Agonafer ◽  
Jimil M. Shah ◽  
Kanan D. Pujara

The energy used by information technology (IT) equipment and the supporting data center equipment keeps rising as data center proliferation continues unabated. In order to contain the rising computing costs, data center administrators are resorting to cost-cutting measures such as not tightly controlling the temperature and humidity levels and, in many cases, installing air-side economizers, with the associated risk of introducing particulate and gaseous contamination into their data centers. The ASHRAE TC9.9 subcommittee on Mission Critical Facilities, Data Centers, Technology Spaces, and Electronic Equipment has accommodated data center administrators by allowing short-period excursions outside the recommended temperature-humidity range, into the allowable classes A1-A3. Under worst case conditions, the ASHRAE A3 envelope allows electronic equipment to operate at temperature and humidity as high as 24°C and 85% relative humidity for short, but undefined, periods of time. This paper addresses the IT equipment reliability issues arising from operation in high humidity and high temperature conditions, with particular attention paid to the question of whether it is possible to determine all-encompassing x-factors that can capture the effects of temperature and relative humidity on equipment reliability. The role of particulate and gaseous contamination and the aggravating effects of high temperature and high relative humidity will be presented and discussed. A method to determine the temperature and humidity x-factors, based on testing in experimental data centers located in polluted geographies, will be proposed.
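The paper's x-factor model is not given in the abstract; as a hedged illustration of what a temperature x-factor can look like, the sketch below uses the common Arrhenius-style acceleration factor relative to a baseline temperature. The activation energy and baseline are assumed values, and the humidity and contamination dependences studied in the paper are not modeled here.

```python
# Illustrative Arrhenius-style temperature x-factor: relative failure rate versus a
# baseline temperature. Activation energy and baseline are assumptions; humidity
# and contamination effects studied in the paper are not modeled here.
import math

BOLTZMANN_EV = 8.617e-5  # eV/K

def temperature_x_factor(t_celsius: float, t_base_celsius: float = 25.0,
                         activation_energy_ev: float = 0.5) -> float:
    t = t_celsius + 273.15
    t_base = t_base_celsius + 273.15
    return math.exp((activation_energy_ev / BOLTZMANN_EV) * (1.0 / t_base - 1.0 / t))

# Example: relative failure-rate factor at a 35 °C inlet versus the 25 °C baseline.
print(temperature_x_factor(35.0))  # > 1 means a higher expected failure rate
```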

