Cooling of Data Centers Using Airside Economizers

Author(s):  
Saket Karajgikar ◽  
Veerendra Mulay ◽  
Dereje Agonafer ◽  
Roger Schmidt

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, the raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. These rising heat load trends in data center facilities have raised concerns over energy usage. The Environmental Protection Agency has reported that the energy used by the data center industry in 2006 was 1.5% of total national energy usage, and experts agree that by 2010 this usage will approach 2% of annual nationwide energy use. This has been the driving force behind new solutions and technologies such as free cooling. Recent studies show that outside air can be drawn in to cool the IT equipment without undue electronic component failures due to contaminants. In this paper, different cases employing an airside economizer are discussed. Numerical models are created to study the qualitative impact of the solution under various operating conditions.
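The airside-economizer decision the paper studies can be sketched as a simple dry-bulb control rule. The function below is a minimal illustration, not the authors' model; the supply setpoint and high-limit cutoff values are assumptions for the example.

```python
def economizer_mode(t_outside_c, t_return_c,
                    t_supply_setpoint_c=18.0, t_cutoff_c=24.0):
    """Simplified dry-bulb airside economizer decision.

    Returns one of:
      'full_free_cooling'    - outside air alone meets the supply setpoint
      'partial_free_cooling' - outside air helps; mechanical cooling tops up
      'mechanical_cooling'   - outside air is too warm to be useful
    """
    if t_outside_c <= t_supply_setpoint_c:
        return "full_free_cooling"
    if t_outside_c < min(t_return_c, t_cutoff_c):
        return "partial_free_cooling"
    return "mechanical_cooling"
```

Real economizer sequences typically also compare enthalpy (humidity matters for IT intake air), which a dry-bulb-only rule ignores.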

Author(s):  
Veerendra Mulay ◽  
Dereje Agonafer ◽  
Gary Irwin ◽  
Darshan Patell

Rising heat load trends in data center facilities have raised concerns over energy usage. The Environmental Protection Agency has reported that the energy used by the data center industry in 2006 was 1.5% of total national energy usage, and experts agree that by 2010 this usage will approach 2% of annual nationwide energy use. Although many new concepts such as airside economizers and cogeneration are gaining traction, many data center facilities spend considerable energy on cooling. In this study, various cabinet designs are discussed. Isolating the supplied cold air from the hot exhaust air is always a challenge in the thermal management of data center facilities. A cabinet design that employs a chimney to aid the isolation of hot and cold air is discussed. A computational model of a representative data center is created to study the effectiveness of the design under various supply air fractions.


Author(s):  
Dan Comperchio ◽  
Sameer Behere

Data center energy consumption can be divided into three broad categories: Information Technology (IT), Electrical, and Mechanical. An efficient data center uses the least amount of non-IT energy, which is typically divided between the mechanical and electrical systems. Mechanical systems generally contribute a large portion of the non-IT energy use by providing cooling from compressor-based equipment [1,2] and because of this, strategies to reduce compressor energy consumption can lead to significant mechanical system energy savings. The most efficient way to reduce compressor energy is through elimination or significant reduction in annual runtime. This is possible with the use of integrated airside or waterside economizers. This paper demonstrates the impacts of economization in data centers through data collected from four operating facilities over the course of implementing various economizer improvement projects. System architectures include water-cooled centrifugal chiller plant with waterside economization, direct expansion air handling units (AHU) with airside economization, air-cooled centrifugal chillers with integrated waterside economization, and direct expansion computer room air conditioners (CRAC) with evaporative cooling and waterside economization. A systematic and methodical comparison of the baseline and post-conditions is discussed, comparing expected to observed economizer operating conditions. The comparison of multiple real-world scenarios revealed a range of variances in expected operation of economizer sequences to actual observations, indicating a need for close monitoring of system performance by data center operators to fully realize economizer benefits within facilities.


Author(s):  
Veerendra Mulay ◽  
Dereje Agonafer ◽  
Roger Schmidt

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, the raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended by the cooling infrastructure than by the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004; over the same period, heat load per product footprint for compute servers tripled. Among the systems currently available and shipping, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to the limits of air cooling of microprocessors. Thermal management of such dense data center clusters using liquid cooling is presented.


2014 ◽  
Vol 1044-1045 ◽  
pp. 1159-1162
Author(s):  
Chang Rong Ge ◽  
Xian Qing Zheng

Data centers, comprising racks and their corresponding servers, form the backbone of today's cloud computing. Continuous operation of large numbers of servers generates large amounts of heat, which in turn requires high-capacity cooling systems. These cooling systems can consume a significant portion of the energy required to run a data center and can negatively impact data center efficiency. With increasing operational costs, the data center industry has started to actively search for efficiency improvements to slow the rising cost of running services. One of the most prominent ways to cut power consumption is free cooling, which uses outside air to cool IT equipment completely or part of the time. This approach is most often used in colder climates and requires less energy since it does not use compressors to cool incoming air. This paper presents an evaluation of Shanghai as a data center location. As temperature affects data center cooling consumption, the focus of the paper is on evaluating optimal operating conditions for local data centers.
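As an illustration of how a candidate location such as Shanghai might be screened from hourly climate data, the sketch below counts the hours of a year cool enough for direct free cooling. The 15 °C dry-bulb threshold and the sample temperature record are assumptions, not values from the paper.

```python
def free_cooling_hours(hourly_temps_c, threshold_c=15.0):
    """Hours in the record when outside air is at or below the
    dry-bulb threshold for direct free cooling."""
    return sum(1 for t in hourly_temps_c if t <= threshold_c)

def free_cooling_fraction(hourly_temps_c, threshold_c=15.0):
    """Fraction of the record usable for free cooling."""
    return free_cooling_hours(hourly_temps_c, threshold_c) / len(hourly_temps_c)
```

A fuller screening would also bound humidity against the allowable IT intake envelope, not dry-bulb temperature alone.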


Author(s):  
Milton Meckler

A growing concern for many users of Data Centers is their continuing availability following the explosive growth of internet services in recent years. The recent maximizing of Data Center IT virtualization investments has improved the consolidation of previously under-utilized server and cabling resources, resulting in higher overall facility utilization and IT capacity. It has also resulted in excessive levels of equipment heat release, e.g. from high-energy (blade-type) servers and telecommunication equipment, in some cases reaching 30 kW/rack or 300 W/ft2. These loads challenge central and distributed air conditioning systems that deliver air via raised floor or overhead paths to rack-mounted servers arranged in alternating cold and hot aisles, with return via end-of-aisle or separate-room CRAC units, which are often found to fight each other, contributing to excessive energy use. Under those circumstances, hybrid indirect liquid cooling facilities are often required to augment the air conditioning systems in order to prevent overheating and degradation of mission-critical IT equipment, keeping rack-mounted servers operating within ASHRAE TC 9.9 prescribed psychrometric limits and IT manufacturers' specifications, beyond which their operational reliability cannot be assured. Recent interest in new web-based software and secure cloud computing is expected to further accelerate the growth of Data Centers; according to a recent study, U.S. Data Centers in 2006 consumed approximately 61 billion kWh of electricity. Computer servers and the supporting power infrastructure for the Internet are estimated to represent 1.5% of all electricity generated, and aggregated IT and communications equipment, including PCs in current use, has been estimated to emit 2% of global carbon emissions.
Therefore the projected eco-footprint of Data Centers has become a matter of growing concern. Accordingly, our paper focuses on how best to improve the energy utilization of the fossil fuels that power Data Centers and the energy efficiency of the related auxiliary cooling and power infrastructures, so as to reduce their eco-footprint and GHG emissions to sustainable levels as soon as possible. To this end, we demonstrate significant comparative savings in annual energy use and reductions in associated annual GHG emissions by employing an on-site cogeneration system (in lieu of current reliance on remote electric power generation), and by introducing energy-efficient outside air (OSA) desiccant-assisted pre-conditioners to maintain Class 1, Class 2, or NEBS indoor air dew-points, as needed, operated with modified existing sensible-only cooling, distributed air conditioning, and chiller systems. This eliminates the need for integral CRAC-unit humidity controls while achieving an estimated 60 to 80% (virtualized) reduction in the number of servers within a hypothetical post-consolidation 3.5 MW demand Data Center located in the southeastern and/or southern U.S., coastal Puerto Rico, or Brazil, characterized by three (3) representative microclimates ranging from moderate to high seasonal outside air coincident design humidity and temperature.


Author(s):  
Thomas J. Breen ◽  
Ed J. Walsh ◽  
Jeff Punch ◽  
Amip J. Shah ◽  
Niru Kumari ◽  
...  

As the energy footprint of data centers continues to increase, models that allow for “what-if” simulations of different data center design and management paradigms will be important. Prior work by the authors has described a multi-scale energy efficiency model that allows for evaluating the coefficient of performance of the data center ensemble (COPGrand), and demonstrated the utility of such a model for choosing operational set-points and evaluating design trade-offs. However, experimental validation of these models poses a challenge because of the complexity involved in tailoring such a model to legacy data centers, with shared infrastructure and limited control over IT workload. Further, test facilities with dummy heat loads or artificial racks in lieu of IT equipment generally have limited utility in validating end-to-end models, owing to the inability of such loads to mimic phenomena such as fan scalability. In this work, we describe the experimental analysis conducted in a special test chamber and a data center facility. The chamber, focusing on system-level effects, is loaded with an actual IT rack, and a compressor delivers chilled air to the chamber at a preset temperature. By varying the load in the IT rack as well as the air delivery parameters — such as flow rate and supply temperature — a setup that simulates the system level of a data center is created. Experimental tests within a live data center facility are also conducted, where the operating conditions of the cooling infrastructure — such as fluid temperatures and flow rates — are monitored and can be analyzed to determine effects such as air flow recirculation and heat exchanger performance. Using the experimental data, a multi-scale model configuration emulating the data center can be defined.
We compare the results from such experimental analysis to a multi-scale energy efficiency model of the data center, and discuss the accuracies as well as inaccuracies within such a model. Difficulties encountered in the experimental work are discussed. The paper concludes by discussing areas for improvement in such modeling and experimental evaluation. Further validation of the complete multi-scale data center energy model is planned.
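As a hedged sketch of the kind of ensemble metric the model evaluates, a COPGrand-style figure can be expressed as IT heat removed divided by total power drawn by the cooling infrastructure. The function and the sample figures are illustrative assumptions, not the authors' formulation.

```python
def cop_grand(it_heat_w, cooling_stage_powers_w):
    """Ensemble coefficient of performance: heat removed from the IT
    equipment divided by the total power drawn by the cooling
    infrastructure (CRAC fans, chillers, pumps, cooling tower)."""
    return it_heat_w / sum(cooling_stage_powers_w)

# Illustrative: 500 kW of IT heat, cooling stages drawing 120/60/20 kW
cop = cop_grand(500e3, [120e3, 60e3, 20e3])  # -> 2.5
```

A higher value means less cooling power per watt of IT heat; the multi-scale model's job is to predict how this ratio shifts with set-points such as supply temperature and flow rate.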


Energies ◽  
2020 ◽  
Vol 13 (22) ◽  
pp. 6147
Author(s):  
Jinkyun Cho ◽  
Jesang Woo ◽  
Beungyong Park ◽  
Taesub Lim

Removing heat from high-density information technology (IT) equipment is essential for data centers. Maintaining the proper operating environment for IT equipment can be expensive. Rising energy costs and energy consumption have prompted data centers to consider hot aisle and cold aisle containment strategies, which can improve energy efficiency and maintain the recommended inlet air temperature to IT equipment. Containment can also resolve, to some degree, hot spots in traditional uncontained data centers. This study analyzes the IT environment of the hot aisle containment (HAC) system, which has been considered an essential solution for high-density data centers. The thermal performance was analyzed for an IT server room with HAC in a reference data center. Computational fluid dynamics analysis was conducted to compare the operating performances of the cooling air distribution systems applied to raised and hard floors and to examine the difference in the IT environment between the server rooms. Regarding operating conditions, the thermal performances in a state wherein the cooling system operated normally and another wherein one unit had failed were compared. The thermal performance of each alternative was evaluated by comparing the temperature distribution, airflow distribution, inlet air temperatures of the server racks, and the recirculation ratio from the outlet to the inlet. In conclusion, the HAC system with a raised floor has higher cooling efficiency than that with a hard floor: the raised floor improves the air distribution efficiency by 28%, corresponding to a 40% reduction in the recirculation ratio under more than 20% of the normal cooling conditions.
The main contribution of this paper is that it realistically implements the effectiveness of the existing theoretical comparison of the HAC system by developing an accurate numerical model of a data center with a high-density fifth-generation (5G) environment and applying the operating conditions.
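The recirculation ratio compared in the study can be illustrated with a simple adiabatic mixing model, in which rack inlet air is a blend of cooling supply air and hot exhaust air. The function below is a sketch under that assumption, not the paper's CFD post-processing; the temperatures in the test values are hypothetical.

```python
def recirculation_ratio(t_inlet_c, t_supply_c, t_exhaust_c):
    """Fraction of rack inlet air drawn from hot exhaust rather than
    the cooling supply, from an adiabatic mixing model:
        T_inlet = (1 - r) * T_supply + r * T_exhaust
    """
    return (t_inlet_c - t_supply_c) / (t_exhaust_c - t_supply_c)
```

An inlet measured at 20 °C with 18 °C supply and 38 °C exhaust gives r = 0.1, i.e. 10% of the inlet air is recirculated exhaust; effective containment drives r toward zero.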


Climate ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. 110
Author(s):  
Alexandre F. Santos ◽  
Pedro D. Gaspar ◽  
Heraldo J. L. de Souza

Data Centers (DC) are specific buildings that require large infrastructures to store all the information needed by companies. All data transmitted over the network is stored in DCs. By the end of 2020, Data Centers will grow 53% worldwide. There are methodologies that measure the efficiency of energy consumption; the most used metric is the Power Usage Effectiveness (PUE) index, but it does not fully reflect efficiency. Three DCs located in the cities of Curitiba, Londrina and Iguaçu Falls (Brazil), with close PUE values, are evaluated in this article using the Energy Usage Effectiveness Design (EUED) index as an alternative to the current method. EUED uses energy as a comparative element in the design phase. Infrastructure consumption is the sum of the energy used by Heating, Ventilating and Air Conditioning (HVAC) equipment, IT equipment, lighting and other loads. The EUED values obtained were 1.245 (kWh/yr)/(kWh/yr), 1.313 (kWh/yr)/(kWh/yr) and 1.316 (kWh/yr)/(kWh/yr) for Curitiba, Londrina and Iguaçu Falls, respectively. The difference between the EUED and the PUE with Constant External Air Temperature (COA) is 16.87% for Curitiba, 13.33% for Londrina and 13.30% for Iguaçu Falls. The new Perfect Design Data center (PDD) index, which ranks efficiency in increasing order, is easy to interpret. It is a redefinition of EUED, given by a linear equation, which provides an approximate result and uses a classification table. It is a decision support index for the location of a Data Center in the project phase.
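For reference, PUE is the ratio of total facility energy to IT equipment energy, and EUED applies the same ratio at the design phase. A minimal sketch follows; the sample figures are illustrative, loosely echoing the Curitiba EUED value rather than measured data from the paper.

```python
def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness: total facility energy over IT
    equipment energy; 1.0 is the theoretical ideal."""
    return total_facility_kwh / it_kwh

# Illustrative: 1245 kWh total for 1000 kWh of IT load -> 1.245
ratio = pue(1245.0, 1000.0)
```

Because the denominator is only IT energy, two facilities with the same PUE can still differ in absolute consumption, which is part of the motivation the authors give for EUED and PDD.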


2009 ◽  
Vol 2009.19 (0) ◽  
pp. 440-443
Author(s):  
Ryuichi NISHIDA ◽  
Tsuneo UEKUSA ◽  
Shisei WARAGAI ◽  
Keisuke SEKIGUCHI

Author(s):  
Veerendra Mulay ◽  
Saket Karajgikar ◽  
Dereje Agonafer ◽  
Roger Schmidt ◽  
Madhusudan Iyengar

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, the raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended by the cooling infrastructure than by the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004; over the same period, heat load per product footprint for compute servers tripled. Among the systems currently available and shipping, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to the limits of air cooling of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in these situations. A parametric study of such a solution is presented in this paper. A representative data center with 40 racks is modeled using a commercially available CFD code. The variation in rack inlet temperature due to tile openings and underfloor plenum depths is reported.
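One way the effect of tile openings can be approximated outside a full CFD model is the standard orifice relation for perforated-tile airflow. The sketch below is an illustration with assumed discharge coefficient and air density, not the paper's method.

```python
import math

def tile_airflow_m3s(open_area_m2, plenum_dp_pa,
                     discharge_coeff=0.65, air_density=1.2):
    """Volumetric airflow through a perforated tile, modeled as an
    orifice: Q = Cd * A * sqrt(2 * dP / rho).

    open_area_m2  - free (open) area of the tile
    plenum_dp_pa  - pressure difference between plenum and room
    """
    return discharge_coeff * open_area_m2 * math.sqrt(
        2.0 * plenum_dp_pa / air_density)
```

More open tile area or a deeper (better-pressurized) plenum raises the delivered airflow, which is the coupling the parametric CFD study explores in detail.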

