Optimization of Cold Aisle Isolation Designs for a Data Center With Roofs and Doors Using Slits

Author(s):  
Srujan Gondipalli ◽  
Bahgat Sammakia ◽  
Siddarth Bhopte ◽  
Roger Schmidt ◽  
Madhusudan K. Iyengar ◽  
...  

Data centers are facilities that house large numbers of computer servers, which typically dissipate high power. With the rapid increase in the heat flux of such systems, their thermal management represents an economic and environmental challenge that needs to be addressed [2]. Given the trends of increasing heat loads and heat fluxes, the focus for users is on providing adequate airflow through the equipment at a temperature that meets the manufacturers’ requirements. Data centers house IT equipment in racks typically arranged in rows that face one another, forming alternating cold and hot aisles in a pattern repeated across the data center. This arrangement helps to separate the cold and hot air streams, but it does not always suffice. Mixing of hot rack exhaust air with cold supply air, short-circuiting of cold air back to the coolers, and recirculation of hot air to the rack inlets are common phenomena that lead to thermal inefficiencies in a typical data center. In a raised-floor data center, rack inlet air temperatures typically rise because hot air infiltrates the cold aisle from the top (the ceiling of the cold aisle) and from the edges or sides. Infiltration can be reduced to a certain extent if the cold aisles are isolated from the ceiling and the hot aisles using partially or fully closed doors with slits to manage the airflow. The key is to redistribute the cold air entering the cold aisle, along with any infiltration, such that the overall average temperature at the rack inlets remains below a predefined level. In this paper, different designs were generated, based on an actual data center model, with the criteria of achieving no hotspots, a relatively low pressure drop across the servers, and a low air velocity in the cold aisle. Several designs are proposed that meet all of the defined constraints.

Author(s):  
Veerendra Mulay ◽  
Dereje Agonafer ◽  
Gary Irwin ◽  
Darshan Patell

Rising heat load trends in data center facilities have raised concerns over energy usage. The Environmental Protection Agency has reported that the energy used by the data center industry in 2006 was 1.5% of the nation’s total energy usage, and experts agree that by 2010 this will approach 2% of annual nationwide energy use. Although many new concepts such as airside economizers and cogeneration are gaining traction, many data center facilities still spend considerable energy on cooling. In this study, various cabinet designs are discussed. Isolating the supplied cold air from the hot exhaust air is always a challenge in the thermal management of data center facilities. A cabinet design that employs a chimney to aid the isolation of hot and cold air is discussed. A computational model of a representative data center is created to study the effectiveness of the design under various supply air fractions.


Author(s):  
Babak Fakhim ◽  
Srinarayana Nagarathinam ◽  
Steven W. Armfield ◽  
Masud Behnia

The increase in the number of data centers in the last decade, combined with higher-power-density racks, has led to a significant increase in the associated total electricity consumption, which is compounded by cooling inefficiencies. Issues such as hot air recirculation in the data center room environment pose substantial challenges for thermal management. Three operational data centers have been studied to identify their cooling issues. Field measurements of temperature were obtained and compared with numerical simulations to evaluate the overall thermal behavior of the data centers and to identify the thermal issues.


Author(s):  
Tahir Cader ◽  
Levi Westra ◽  
Andres Marquez

Although semiconductor manufacturers have provided temporary relief with lower-power multi-core microprocessors, OEMs and data center operators continue to push the limits of individual rack power densities. It is not uncommon today for data center operators to deploy multiple 20 kW racks in a facility. Such rack densities are exacerbating the major issues of power and cooling in data centers, forcing operators to take a hard look at the efficiencies of their facilities. Malone and Belady (2006) have proposed three metrics, Power Usage Effectiveness (PUE), Data Center Efficiency (DCE), and the Energy-to-Acquisition Cost ratio (EAC), to help data center operators quickly quantify the efficiency of their data centers. In their paper, Malone and Belady present nominal PUE values across a broad cross-section of data centers at four levels of optimization, one of which involves the use of Computational Fluid Dynamics (CFD). In the current paper, CFD is used to conduct an in-depth investigation of a liquid-cooled data center that would potentially be housed at the Pacific Northwest National Laboratory (PNNL). The boundary conditions used in the CFD model are based upon actual measurements on a rack of liquid-cooled servers housed at PNNL. The analysis shows that the liquid-cooled facility could achieve a PUE of 1.57, as compared to a PUE of 3.0 for a typical data center (the lower the PUE, the better, with values below 1.6 approaching ideal). The increase in data center efficiency also translates into an increase in the amount of IT equipment that can be deployed: at a PUE of 1.57, the analysis shows that 91% more IT equipment can be deployed than in the typical data center. The paper discusses the PUE analysis and also explores the impact of raising data center efficiency via the use of multiple cooling technologies and CFD analysis. Complete results of the analyses will be presented in the paper.
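The 91% figure follows directly from the definition of PUE as total facility power divided by IT equipment power: for a fixed facility power budget, deployable IT power scales as 1/PUE. A minimal sketch of that arithmetic (the 1000 kW budget is an arbitrary illustration, not a value from the study):

```python
# PUE = total facility power / IT equipment power, so for a fixed facility
# power budget the deployable IT power scales as 1 / PUE.
facility_power_kw = 1000.0            # arbitrary fixed budget, for illustration
pue_typical, pue_liquid = 3.0, 1.57   # values cited in the abstract

it_typical = facility_power_kw / pue_typical   # IT load in the typical facility
it_liquid = facility_power_kw / pue_liquid     # IT load in the liquid-cooled one

gain = it_liquid / it_typical - 1              # = pue_typical / pue_liquid - 1
print(f"additional IT equipment: {gain:.0%}")  # → about 91%
```

Note that the gain depends only on the PUE ratio, so the chosen power budget cancels out.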


Energies ◽  
2019 ◽  
Vol 12 (5) ◽  
pp. 814 ◽  
Author(s):  
Marcel Antal ◽  
Tudor Cioara ◽  
Ionut Anghel ◽  
Radoslaw Gorzenski ◽  
Radoslaw Januszewski ◽  
...  

This paper addresses the problem of data centers’ cost efficiency, considering the potential of reusing the generated heat in district heating networks. We started by analyzing the requirements and heat-reuse potential of a high performance computing data center, and then defined a heat-reuse model that simulates the thermodynamic processes in the server room. This allows the temperature of the hot air recovered by the heat pumps from the server room to be estimated by means of Computational Fluid Dynamics (CFD) simulations, allowing the pumps to operate more efficiently. To address the time and space complexity at run-time, we defined a Multi-Layer Perceptron neural network infrastructure to predict the hot air temperature distribution in the server room from training data generated by means of simulations. For testing purposes, we modeled a virtual server room with a volume of 48 m3 and two typical 42U racks. The results show that, using our model, the heat distribution in the server room can be predicted with an error of less than 1 °C, allowing data centers to accurately estimate in advance the amount of waste heat to be reused and the efficiency of heat pump operation.
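The surrogate-modeling idea in this abstract, training a neural network on simulation outputs so that run-time predictions avoid a full CFD solve, can be sketched as follows. The training data here is synthetic (a simple stand-in for CFD results) and the one-hidden-layer architecture is an arbitrary small example, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for CFD training data (hypothetical relationship):
# inputs = (supply air temperature [°C], rack load fraction),
# target = hot-aisle return air temperature [°C].
X = rng.uniform([18.0, 0.2], [27.0, 1.0], size=(200, 2))
y = (X[:, 0] + 12.0 * X[:, 1] + rng.normal(0, 0.3, 200)).reshape(-1, 1)

# Normalize inputs for stable gradient descent.
Xn = (X - X.mean(0)) / X.std(0)

# One-hidden-layer MLP trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.array([y.mean()])  # start at mean target
lr = 0.05
for _ in range(3000):
    h = np.tanh(Xn @ W1 + b1)       # hidden activations
    pred = h @ W2 + b2              # predicted hot-air temperature
    err = pred - y
    # Backpropagate mean-squared-error gradients.
    gW2 = h.T @ err / len(y); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = Xn.T @ dh / len(y); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mae = np.abs(pred - y).mean()
print(f"mean absolute error on training data: {mae:.2f} °C")
```

Once trained, evaluating the network is a handful of matrix multiplies, which is what makes the run-time prediction cheap compared with re-running a CFD simulation.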


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Na Huang

In some data centers, cold air must be delivered directly to the cabinets to meet cooling requirements, and the mixing of cold air and hot air reduces the utilization efficiency of the cold air. To solve this problem, a jet cooling model is established to determine the optimal position of the outlet based on the movement of the cold air.


Author(s):  
Bharathkrishnan Muralidharan ◽  
Saurabh K. Shrivastava ◽  
Mahmoud Ibrahim ◽  
Sami A. Alkharabsheh ◽  
Bahgat G. Sammakia

The use of air containment systems has been a growing trend in the data center industry and is an important energy-saving strategy for data center optimization. Cold Aisle Containment (CAC) is one of the most effective passive cooling solutions for high-density heat load applications. Cold Aisle Containment provides a physical separation between the cold air and the hot exhaust air by enclosing the cold aisle, preventing hot air recirculation and cold air bypass. This separation provides uniform inlet air temperatures to the servers, which can further contribute to overall data center efficiency. This paper includes the thermal test data for a data center lab with and without a CAC set up. The paper quantifies the thermal impact of implementing a CAC system over an open Hot Aisle/Cold Aisle (HA/CA) arrangement for three different cabinet heat load conditions at two different CRAC (Computer Room Air Conditioner) return air set point conditions. It studies the advantages of CAC over the standard HA/CA arrangement. A case study is presented showing a cooling energy savings of 22% with the use of a CAC system over a standard HA/CA arrangement.


Author(s):  
Amir Radmehr ◽  
Roger R. Schmidt ◽  
Kailash C. Karki ◽  
Suhas V. Patankar

In raised-floor data centers, distributed leakage flow (the airflow through the seams between panels on the raised floor) reduces the amount of cooling air available at the inlets of the computer equipment. This airflow must be known to determine the total cooling air requirement in a data center. The amount of distributed leakage flow depends on the area of the seams and the plenum pressure, which, in turn, depends on the amount of airflow into the plenum and the total open area (the combined area of perforated tiles, cutouts, and seams between panels) on the raised floor. The goal of this study is to outline a procedure for measuring leakage flow, to provide data on the amount of the distributed leakage flow, and to show the quantitative relationship between the leakage flow and the leakage area. It also uses a computational model to calculate the distributed leakage flow, the flow through perforated tiles, and the plenum pressure. The results obtained from the model are verified against the measurements. Such a model can be used for the design and maintenance of data centers. The measurements show that the leakage flow in a typical data center is between 5% and 15% of the available cooling air. The measured quantities were used to estimate the area of the seams; for this data center, it was found to be 0.35% of the floor area. The computational model represents the actual physical scenarios very well: the discrepancy between the calculated and measured values of leakage flow, flow through perforated tiles, and plenum pressure is less than 10%.
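The dependence of leakage on open area and plenum pressure described above can be illustrated with the standard orifice-flow relation, Q = C_d · A · √(2ΔP/ρ). In the sketch below, only the 0.35% seam-area fraction comes from the abstract; the floor area, plenum pressure, discharge coefficients, and perforated-tile open area are all assumed values for illustration:

```python
import math

# Hypothetical plenum and raised-floor parameters (assumptions, not the
# paper's measured site data):
rho = 1.2                        # air density, kg/m^3
dp = 25.0                        # plenum gauge pressure, Pa
cd_seam, cd_tile = 0.61, 0.65    # assumed discharge coefficients

floor_area = 500.0               # m^2, assumed
seam_area = 0.0035 * floor_area  # 0.35% of floor area, as reported in the study
tile_area = 0.03 * floor_area    # assumed total perforated-tile open area

def orifice_flow(cd, area, dp, rho):
    """Volumetric flow through an opening (orifice equation), m^3/s."""
    return cd * area * math.sqrt(2.0 * dp / rho)

q_seam = orifice_flow(cd_seam, seam_area, dp, rho)   # distributed leakage
q_tile = orifice_flow(cd_tile, tile_area, dp, rho)   # useful tile flow
leak_fraction = q_seam / (q_seam + q_tile)
print(f"leakage fraction: {leak_fraction:.1%}")      # falls in the 5-15% band
```

Because both flows scale with the same √(2ΔP/ρ) factor, the leakage *fraction* depends only on the effective open areas, which is why estimating the seam area is central to the study's procedure.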


Author(s):  
Roger Schmidt ◽  
Aparna Vallury ◽  
Madhusudan Iyengar

The increased focus on green technologies and energy efficiency, coupled with the insatiable desire of IT equipment customers for more performance, has driven manufacturers to deploy energy-efficient technologies in data centers. This paper describes a technique for achieving significant energy savings by preventing the cold and hot air streams within the data center from mixing. More specifically, techniques will be described that separate the cool supply air to the server racks from the hot exhaust air that returns to the air conditioning units. This separation can be achieved by three types of containment systems: cold aisle containment, hot aisle containment, and server rack exhaust chimneys. The advantages and disadvantages of each technique will be outlined. To show the potential for energy efficiency improvements, a case study of deploying a cold aisle containment solution for an 8944 ft² data center will be presented. This study will show that 59% of the energy required for the computer room air conditioning (CRAC) units used in a traditional open-type data center could be saved.


2010 ◽  
Vol 132 (2) ◽  
Author(s):  
Roger Schmidt ◽  
Madhusudan Iyengar ◽  
Joe Caricari

With the ever-increasing heat dissipated by information technology (IT) equipment housed in data centers, it is becoming more important to project the changes that can occur in the data center as newer, higher-powered hardware is installed. The computational fluid dynamics (CFD) software that is available has improved over the years, and CFD software specific to data center thermal analysis has also been developed. This has improved the timeliness of quick analyses of the effects of introducing new hardware into the data center. But it is critically important that this software provide the user with an accurate report of the effects of adding this new hardware. The purpose of this paper is to examine a large cluster installation and compare the CFD analysis with environmental measurements obtained from the same site. This paper shows measurements and CFD data for racks as high as 27 kW, clustered such that heat fluxes in some regions of the data center exceeded 700 W per square foot. It describes the thermal profile of a high performance computing cluster located in a data center, and a comparison of that cluster modeled via CFD. The high performance advanced simulation and computing (ASC) cluster had a peak performance of 77.8 TFlop/s and employed more than 12,000 processors, 50 Tbytes of memory, and 2 Pbytes of globally accessible disk space. The cluster was first tested in the manufacturer’s development laboratory in Poughkeepsie, New York, and then shipped to Lawrence Livermore National Laboratory in Livermore, California, where it was installed to support the national security mission of the U.S. Detailed measurements were taken in both data centers and were previously reported. The Poughkeepsie results are reported here along with a comparison to CFD modeling results. In some areas of the Poughkeepsie data center, there were regions that exceeded the equipment inlet air temperature specifications by a significant amount. These areas are highlighted and reasons given for why they failed to meet the criteria. The modeling results by region showed trends that compared somewhat favorably, but some rack thermal profiles deviated quite significantly from the measurements.


Author(s):  
Uschas Chowdhury ◽  
Walter Hendrix ◽  
Thomas Craft ◽  
Willis James ◽  
Ankit Sutaria ◽  
...  

In a data center, electronic equipment such as servers and switches dissipates heat, and the corresponding cooling systems typically contribute 25–35% of total energy consumption. The heat load continues to increase as there is a greater need for miniaturization and convergence. In 2014, data centers in the U.S. consumed an estimated 70 billion kWh, representing about 1.8% of total U.S. electricity consumption, and based on current trend estimates they are projected to consume approximately 73 billion kWh in 2020 [1]. Much research has been conducted and many strategies adopted to minimize energy cost. The recommended dry-bulb temperature range for long-term operation and reliability with air cooling is 18–27°C, and the largest allowable inlet temperature range is 5°C to 45°C, with American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) classes enabling much broader allowable zones [2]. But choosing a proper cooling system is very important, especially for the thermal management of IT equipment with high heat loads, such as 1U or 2U multi-core high-end servers and blade servers, which provide more computing per watt. Many problems may occur, including high inlet temperatures due to the mixing of hot air with cold air, local hot spots, lower system reliability, and increased failures and downtime. Among the many approaches to managing high-density racks, in-row coolers are used between racks to provide cold air and minimize local hot spots. This paper describes a computational study performed by applying in-row coolers to different rack power configurations with and without aisle containment. The power, as well as the number of racks, is varied to study the effect of raised inlet temperature on the IT equipment in a Computational Fluid Dynamics (CFD) model developed in 6SigmaRoom with the help of built-in library items. A comparative analysis is also performed for a typical small, non-raised-floor facility to investigate the efficacy and limitations of in-row coolers in the thermal management of IT equipment with variation in rack heat load and containment. Several other aspects, such as a parametric study of variable opening areas of the duct between racks and in-row coolers, variation of the operating flow rate, and failure scenarios, are also studied to find the proper flow distribution and uniformity of outlet temperature, and to predict better performance, energy savings, and reliability. The results are presented as general guidance for flexible, quick installation and safe operation of in-row coolers to improve thermal efficiency.

