Inviscid and Viscous Numerical Models Compared to Experimental Data in a Small Data Center Test Cell

2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Ethan Cruz ◽  
Yogendra Joshi

Both localized power densities and overall power consumption within the data center continue to rise, following the same upward trend as the information technology (IT) equipment housed within it. Air cooling this increasing power has proved a significant challenge at both the IT equipment and data center levels. To combat this challenge, computational fluid dynamics and heat transfer (CFD/HT) models have been employed as the dominant technique for the design and optimization of both new and existing data centers. This study continues earlier comparisons of CFD/HT models to experimentally measured temperature and flow fields in a small data center test cell. It compares an inviscid model, a laminar flow model, and three turbulence models to six sets of experimentally collected data, drawn from two IT equipment rack power dissipations and three layouts of perforated tiles. The locations where the different CFD/HT models deviate from the experimental data are discussed, along with the computational effort involved in running the models. A new grid analysis was performed on the different CFD/HT models to minimize computational effort. The inviscid model could run on a smaller grid than the viscous models and, even on a grid of the same size, ran 30% faster than the fastest viscous model. Owing to both the reduced grid size and the simpler equation set, the inviscid model ran more than thirty times faster than the next fastest model. That the inviscid model ran fastest is not surprising; what was unexpected is that it also showed the smallest deviations from the experimental data in all six cases. This is most likely due to the arrangement of the data center test cell, with relatively few high-velocity air jets and large open spaces around the IT equipment. More tightly packed data centers, with higher air velocities and stronger turbulent mixing, will certainly produce different results than those found in this study.
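The abstract does not state which deviation metric was used to rank the models; as a minimal illustrative sketch (all names and sample values are assumptions, not data from the study), a root-mean-square deviation between predicted and measured temperatures at matched sensor locations could be computed as follows:

```python
import numpy as np

def rms_deviation(predicted, measured):
    """Root-mean-square deviation between model and experiment."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.sqrt(np.mean((predicted - measured) ** 2))

# Hypothetical rack-inlet temperatures (deg C) at matched sensor locations.
t_model = [18.2, 19.1, 21.4, 24.0]      # CFD/HT prediction (assumed values)
t_measured = [18.0, 19.5, 21.0, 24.3]   # thermocouple data (assumed values)
print(f"RMS deviation: {rms_deviation(t_model, t_measured):.2f} K")
```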

Author(s):  
Ethan Cruz ◽  
Yogendra Joshi ◽  
Madhusudan Iyengar ◽  
Roger Schmidt

Information Technology (IT) equipment compaction has become a significant air cooling challenge at the data center level. Computational Fluid Dynamics and Heat Transfer (CFD/HT) models have been employed as the dominant technique for the design and optimization of both new and existing data centers. Understanding the limits of a CFD/HT model's ability to predict the actual data center temperature field and flow characteristics is critical to optimizing the actual data center rather than the model of it. This matters most near the IT equipment, where the temperature and flow specifications set by the IT equipment manufacturers must be maintained for reliable operation. This study continues earlier comparisons of CFD/HT models to experimentally measured temperature and flow fields in a small data center test cell. It compares the experimentally collected data for three different layouts of perforated tiles to a CFD/HT model with seven turbulence models not previously evaluated. The locations where the different turbulence models deviate from the experimental data are discussed, along with the computational effort involved in running the CFD/HT models. The zero-equation (mixing-length) and Spalart-Allmaras turbulence models produced the smallest deviations from the experimental data, but the former required only a fifth of the computational effort of the latter. The laminar flow model required the least computational effort, running more than twice as fast as the zero-equation turbulence model, and produced deviations similar to those of the six different k-ε turbulence models.
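For context, the zero-equation model's low computational cost follows from its closure: the eddy viscosity is an algebraic function of a prescribed mixing length rather than the solution of additional transport equations. In Prandtl's classic form (a standard result, not a formula quoted from the paper):

```latex
\nu_t = \ell_m^2 \left| \frac{\partial \bar{u}}{\partial y} \right|
```

where \(\ell_m\) is the prescribed mixing length and \(\bar{u}\) the mean velocity; no extra PDE is solved per cell, which is consistent with its fivefold speed advantage over a one-equation model.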


Author(s):  
Ethan Cruz ◽  
Yogendra Joshi ◽  
Madhusudan Iyengar ◽  
Roger Schmidt

As the performance of Information Technology (IT) equipment continues to rise, so do its power dissipation and overall power density. Air cooling this increasing power has proved a significant challenge even at the data center level. To combat this challenge, Computational Fluid Dynamics and Heat Transfer (CFD/HT) models have been employed as the dominant technique for the design and optimization of both new and existing data centers. This study continues earlier comparisons of CFD/HT models to experimentally measured temperature and flow fields in a small data center test cell. It compares previously unpublished experimental data for the 11 kW dissipation cases, using three different layouts of perforated tiles, to a CFD/HT model run with eight turbulence models and a laminar flow model. The locations where the different turbulence models deviate from the experimental data are discussed, along with the computational effort involved in running the CFD/HT models. The laminar flow model and the Spalart-Allmaras turbulence model produced the smallest deviations from experimental data, but the former required only one twentieth of the computational effort of the latter.
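For reference (not stated in the abstract), the cost gap is consistent with model structure: the laminar model solves the Navier-Stokes equations with no turbulence closure at all, while Spalart-Allmaras adds one transport equation for a modified eddy viscosity \(\tilde{\nu}\), shown here schematically without the trip terms:

```latex
\frac{D\tilde{\nu}}{Dt} = c_{b1}\,\tilde{S}\,\tilde{\nu}
  - c_{w1} f_w \left(\frac{\tilde{\nu}}{d}\right)^{2}
  + \frac{1}{\sigma}\left[ \nabla \cdot \big( (\nu + \tilde{\nu})\nabla\tilde{\nu} \big)
  + c_{b2}\,|\nabla\tilde{\nu}|^{2} \right]
```

where \(d\) is the wall distance and the \(c\), \(f_w\), and \(\tilde{S}\) terms are the model's calibrated production and destruction functions.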


2020 ◽  
Vol 142 (2) ◽  
Author(s):  
Oluwaseun Awe ◽  
Jimil M. Shah ◽  
Dereje Agonafer ◽  
Prabjit Singh ◽  
Naveen Kannan ◽  
...  

Airside economizers lower the operating cost of data centers by reducing or eliminating mechanical cooling; however, they increase the risk of reliability degradation of information technology (IT) equipment due to contaminants. IT equipment manufacturers have tested equipment performance and guarantee the reliability of their equipment in environments within ISA 71.04-2013 severity level G1 and the ASHRAE recommended temperature-relative humidity (RH) envelope, and they require data center operators to meet all the specified conditions consistently before fulfilling warranty claims on equipment failures. Determining the reliability of electronic hardware under higher-severity conditions requires field data obtained from real data centers. In this study, a corrosion classification coupon experiment per ISA 71.04-2013 was performed to determine the severity level of a research data center (RDC) located in an industrial area of hot and humid Dallas. The temperature-RH excursions were analyzed using time-series and weather-data bin analysis of trend data covering the duration of operation. After a period of operation, failures were recorded on two power distribution units (PDUs) located in the hot aisle. The damaged hardware, along with other hardware, was evaluated, and a cumulative corrosion damage study was carried out. A hypothetical estimate of component end of life is provided to determine the free air-cooling hours available at the site. The fact that not a single server operated with fresh air cooling failed shows that using evaporative/free air cooling is not detrimental to IT equipment reliability. This study, however, must be repeated in other geographical locations to determine whether the contamination effect is location dependent.
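The abstract does not give the binning scheme; as a hedged sketch of the kind of weather-data bin analysis described (the bin edges and the synthetic trend log below are assumptions, not the study's data), one could tabulate operating hours per temperature-RH bin like this:

```python
import numpy as np

# Hypothetical one-year hourly trend log: temperature (deg C) and RH (%).
rng = np.random.default_rng(0)
temp_c = rng.normal(24.0, 4.0, 8760)
rh_pct = rng.normal(50.0, 12.0, 8760)

# Assumed bin edges; a real analysis would take these from ASHRAE guidance.
t_edges = [10, 15, 20, 25, 30, 35, 40]
rh_edges = [0, 20, 40, 60, 80, 100]

hours, _, _ = np.histogram2d(temp_c, rh_pct, bins=[t_edges, rh_edges])
print(hours)  # operating hours spent in each temperature-RH bin
```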


Author(s):  
Dustin W. Demetriou ◽  
Vinod Kamath ◽  
Howard Mahaney

The generation-to-generation IT performance and density demands continue to drive innovation in data center cooling technologies. For many applications, traditional chilled-air cooling can no longer deliver cooling efficiently. Water cooling has been used in data centers for more than 50 years to improve heat dissipation, boost performance, and increase efficiency. While water cooling undoubtedly carries a higher initial capital cost, it can be very cost-effective when the true lifecycle cost of a water-cooled data center is considered. This study addresses how one should evaluate the true total cost of ownership of water-cooled data centers by considering the combined capital and operational costs of both the IT systems and the data center facility. It compares several metrics, including return on investment, for three cooling technologies: traditional air cooling, rack-level cooling using rear-door heat exchangers, and direct water cooling via cold plates. The results highlight several important variables, namely IT power, data center location, site electric utility cost, and construction costs, and show how each of these influences the total cost of ownership of water cooling. The study further considers implementing water cooling as part of a new data center construction project versus a retrofit or upgrade of an existing data center facility.
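The paper's actual cost model is not given in the abstract; a minimal sketch of a lifecycle total-cost-of-ownership comparison (all dollar figures, lifetimes, and the discount rate below are hypothetical placeholders) might look like:

```python
def tco(capex, annual_opex, years=10, discount_rate=0.05):
    """Capital cost plus present value of operating cost over the lifecycle."""
    pv_opex = sum(annual_opex / (1 + discount_rate) ** y
                  for y in range(1, years + 1))
    return capex + pv_opex

# Hypothetical costs (USD) for the three cooling technologies compared above.
options = {
    "traditional air cooling": (1.0e6, 400_000),
    "rear-door heat exchangers": (1.3e6, 300_000),
    "direct water cooling (cold plates)": (1.6e6, 220_000),
}
for name, (capex, opex) in options.items():
    print(f"{name}: ${tco(capex, opex):,.0f}")
```

A higher-capex option wins whenever its operating savings, discounted over the facility lifetime, exceed the capital premium, which is why site utility cost and location matter so much in the results.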


Author(s):  
Michael M. Toulouse ◽  
David Lettieri ◽  
Van P. Carey ◽  
Cullen E. Bash

This paper summarizes the comparison of predictions by a compact model of airflow and transport in data centers to temperature measurements from an operational data center. The simplified model and code package, referred to as COMPACT (Compact Model of Potential Flow and Convective Transport), is intended as an alternative to time-intensive full CFD thermofluidic models as a first-order design tool, as well as a potential improvement to plant-based controllers. COMPACT combines a potential flow model with the convective energy equation, using sparse matrix solvers to obtain flow and temperature solutions. Full-room solutions can be generated in 15 seconds on a commercially available laptop, and an accompanying graphical user interface allows quick configuration of data center designs and analysis of flow and temperature results. Validation experiments were conducted at the HP Labs data center in Palo Alto, CA, which has a traditional configuration: inlet floor tiles feed cold air between two rows of server racks, and air then exits either through ceiling tiles or by direct room return to CRAC units located at the side of the room. Temperatures were recorded at multiple points along the entering and exiting flow faces within the room, as well as at various points in the cold and hot aisles, and are presented and compared to model predictions to assess accuracy. Areas of greater and lesser accuracy are analyzed, along with conclusions about the strengths and weaknesses of the model. In some cases, the average predicted temperature along in-flowing rack faces was within one degree of the average measured temperature; however, the differences in temperature are not evenly distributed. The most pronounced variations between the model and the room measurements occurred in the areas above the server racks where recirculation is most likely to occur. In these areas the predicted temperature was higher than the experimental values, which can likely be attributed to the absence of buoyancy effects in the simplified potential flow model. Adaptations of the model and its configuration standards are proposed to yield more accurate temperature distributions, along with investigations into how unaccounted-for heat sources and flow phenomena affect comparisons between measurements and idealized model output.
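In outline, a potential-flow transport model of this class (stated generically; the abstract does not give COMPACT's exact equations) solves Laplace's equation for a velocity potential and then advects energy with the resulting velocity field:

```latex
\nabla^{2}\phi = 0, \qquad \vec{u} = \nabla\phi, \qquad
\vec{u}\cdot\nabla T = \alpha\,\nabla^{2} T
```

Discretizing both equations on a coarse room grid yields the sparse linear systems mentioned above, which is what makes full-room solutions in seconds, rather than hours, feasible. Because the velocity field is irrotational and temperature does not feed back into it, buoyancy-driven recirculation cannot be represented, consistent with the over-prediction observed above the racks.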


Author(s):  
Veerendra Mulay ◽  
Saket Karajgikar ◽  
Dereje Agonafer ◽  
Roger Schmidt ◽  
Madhusudan Iyengar

The power consumed by server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, the raised floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of the data center, however, may not address situations in which more energy is expended by the cooling infrastructure than by the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute server equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000-2004 and tripled for compute servers over the same period. Among the systems currently available and shipping, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, much as they did for air cooling of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in these situations. A parametric study of such a solution is presented in this paper. A representative data center with 40 racks is modeled using a commercially available CFD code, and the variation in rack inlet temperature with tile opening and underfloor plenum depth is reported.
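The abstract does not list the parameter values swept; purely as an illustrative skeleton of such a parametric study (the open-area fractions, plenum depths, and the toy surrogate standing in for the CFD solve are all assumptions), the driver loop might look like:

```python
import itertools

# Hypothetical sweep matrix for a study of this kind (values assumed).
tile_open_fractions = [0.25, 0.40, 0.56]  # perforated tile open-area fraction
plenum_depths_m = [0.45, 0.60, 0.90]      # underfloor plenum depth

def rack_inlet_temp(open_frac, depth_m):
    """Placeholder for a full CFD solve; a toy surrogate, not a real model."""
    return 18.0 + 12.0 * (1.0 - open_frac) - 3.0 * depth_m

for f, d in itertools.product(tile_open_fractions, plenum_depths_m):
    print(f"open={f:.2f}, depth={d:.2f} m -> Tin={rack_inlet_temp(f, d):.1f} C")
```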


Author(s):  
James W. VanGilder ◽  
Xuanhang (Simon) Zhang ◽  
Christopher M. Healey

Potential flow models (PFM) have been implemented for a variety of applications, including data center airflow and temperature estimation. As an approximate solution to the data center room physics, potential flow models have great value in their simplicity and in the limited computational effort required to provide estimates. However, potential flow models cannot capture the effects of buoyancy, which can alter airflow patterns within data centers. We show how this effect can be simulated within PFM, resulting in a model we call Enhanced PFM (EPFM). This model is only marginally more complex to implement than PFM and retains many of the properties of the original, specifically its simplicity and stability. Solution time, about double that of PFM, is still only a small fraction of that of CFD, while empirical tests show a marked improvement in the prediction of key data center temperatures.
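The abstract does not disclose how buoyancy is grafted onto the potential-flow solution. For orientation only, a standard buoyant-velocity scale that any such correction must roughly reproduce is:

```latex
u_b \sim \sqrt{g\,\beta\,\Delta T\,H}
```

where \(\beta\) is the thermal expansion coefficient of air, \(\Delta T\) a local hot-to-ambient temperature difference, and \(H\) a characteristic plume height; this is a textbook scale, not EPFM's actual formulation.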


Author(s):  
Magnus K. Herrlin ◽  
Michael K. Patterson

Increased Information and Communications Technology (ICT) capability and the improved energy efficiency of today's server platforms have created opportunities for the data center operator. However, these platforms also test the limits of many data center cooling systems, and new design considerations are necessary to cool high-density data centers effectively. Challenges exist in both the capital and operational costs of the thermal management of ICT equipment. This paper details how air cooling can address both challenges, providing a low Total Cost of Ownership (TCO) and a highly energy-efficient design at high heat densities. We consider trends in heat generation from servers and how the resulting densities can be cooled effectively. A number of key factors are reviewed, and appropriate design considerations are developed for air cooling 2000 W/ft2 (21,500 W/m2). Although greater engineering effort is required, such data centers can be built with current technology, hardware, and best practices. The density limitations arise primarily from airflow management and cooling system controls. Computational Fluid Dynamics (CFD) modeling is discussed as a key part of the analysis, allowing high-density designs to be implemented successfully. Well-engineered airflow management and control systems, designed to minimize airflow by preventing the mixing of cold and hot airstreams, allow high heat densities. Energy efficiency is gained by treating the whole equipment room as part of the airflow management strategy, making use of the extended environmental ranges now recommended, and implementing air-side economizers.
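To make the airflow-management challenge concrete: the volume flow needed to absorb a given IT load scales inversely with the air-side temperature rise. As a worked example with assumed numbers (not figures from the paper), removing 30 kW from one rack with a 12 K air temperature rise requires roughly:

```latex
\dot{V} = \frac{P}{\rho\, c_p\, \Delta T}
        = \frac{30{,}000\ \text{W}}{1.2\ \text{kg/m}^3 \times 1006\ \text{J/(kg K)} \times 12\ \text{K}}
        \approx 2.1\ \text{m}^3/\text{s} \approx 4400\ \text{CFM}
```

which is why preventing hot-cold mixing, and thereby preserving the full \(\Delta T\), is central to air cooling at these densities.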


Author(s):  
Yongzhan He ◽  
Guofeng Chen ◽  
Jiajun Zhang ◽  
Tianyu Zhou ◽  
Tao Liu ◽  
...  

The advent of the big data era, the rapid development of the mobile internet, and the rising demand for cloud computing services require increasingly more compute capability from data centers. This increase will most likely come from higher rack and room power densities or even the construction of new Internet data centers. But growth in a data center's business-critical IT equipment (servers, hubs, routers, wiring patch panels, and other network appliances), not to mention the infrastructure needed to keep these devices alive and protected, encroaches on another IT goal: reducing long-term energy usage. Large Internet data centers are looking at every possible way to reduce cooling cost and improve efficiency. One emerging trend in the industry is to move to higher-ambient data center operation and to use air-side economizers. However, these two trends can have significant implications for corrosion risk in data centers. The prevailing practice in data centers has often been "the colder, the better." However, some leading server manufacturers and data center efficiency experts share the opinion that data centers can run far hotter than they do today without sacrificing uptime, with large savings in both cooling-related costs and CO2 emissions. Why raise temperatures? Cooling a data center requires a large refrigeration system that consumes substantial energy, and the capital, maintenance, and operating costs of the cooling infrastructure are a heavy burden. Ahuja et al. [1] studied cooling path management in data centers at typical operating temperatures as well as at higher ambient operating temperatures. High-temperature ambient (HTA) operation combined with corrosion-resistance technology can reduce the required refrigeration output, and this innovation opens a new direction for data centers. Note that HTA does not mean the higher the better. Before embracing HTA, two key points must be addressed and understood: first, server stability at the optimal temperature from the data center's perspective; second, corrosion-resistant technology. With fresh air cooling, the server must tolerate seasonal and diurnal temperature variations, which can exceed 35 °C; to some extent, therefore, HTA design is the premise of corrosion-resistant design. In this paper, we present methods to realize precise HTA operation along with corrosion-resistant technology, achieved through an orchestrated collaboration between the IT and cooling infrastructures.


Author(s):  
Waleed A. Abdelmaksoud ◽  
H. Ezzat Khalifa ◽  
Thong Q. Dang ◽  
Roger R. Schmidt ◽  
Madhusudan Iyengar
