Expanded Assessment of a Practical Thermally Aware Energy-Optimized Load Placement Strategy for Open-Aisle, Air-Cooled Data Centers

2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Dustin W. Demetriou ◽  
H. Ezzat Khalifa

This paper expands on the work of Demetriou and Khalifa (2013, “Thermally Aware, Energy-Based Load Placement in Open-Aisle, Air-Cooled Data Centers,” ASME J. Electron. Packag., 135(3), p. 030906), which investigated practical IT load placement options in open-aisle, air-cooled data centers. That study found that a robust approach is to use real-time temperature measurements at the rack inlets and to remove IT load from the servers with the warmest inlet temperatures. By holistically optimizing the load placement strategy together with the cooling infrastructure over a range of data center IT utilization levels, the present study investigates: the effect of ambient temperature on data center operation; the consolidation of servers by shutting them off completely; a strategy complementary to those of Demetriou and Khalifa (2013) that adds IT load beginning with the servers that have the coldest inlet temperatures; and the development of load placement rules via either static allocation (i.e., during data center benchmarking) or dynamic allocation (using real-time data from the current thermal environment). In all of these case studies, a key finding is that a significant reduction in the cooling infrastructure's power consumption is achieved by reducing the CRAH airflow rate. In many cases, these savings exceed those obtained by supplying higher-temperature chilled water from the refrigeration units.
Therefore, the path to realizing the industry's goal of higher IT equipment inlet temperatures for improved energy efficiency should be through both reduced airflow rates and increased supply air temperatures, not through higher CRAH supply air temperatures alone.
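The "warmest inlet first" load-shedding rule summarized above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation; the server names, temperatures, and loads below are invented for the example.

```python
# Hypothetical sketch of the load-shedding rule: when total IT demand drops,
# remove load greedily from the servers with the highest measured inlet
# air temperature. All values below are illustrative assumptions.

def shed_load(servers, load_to_remove_kw):
    """servers: list of dicts with 'inlet_temp_c' and 'load_kw'.
    Sheds load warmest-inlet-first; returns any demand left unshed."""
    remaining = load_to_remove_kw
    for s in sorted(servers, key=lambda s: s["inlet_temp_c"], reverse=True):
        if remaining <= 0:
            break
        shed = min(s["load_kw"], remaining)
        s["load_kw"] -= shed
        remaining -= shed
    return remaining

servers = [
    {"name": "r1", "inlet_temp_c": 27.5, "load_kw": 6.0},
    {"name": "r2", "inlet_temp_c": 22.0, "load_kw": 8.0},
    {"name": "r3", "inlet_temp_c": 30.1, "load_kw": 4.0},
]
leftover = shed_load(servers, 7.0)
# r3 (warmest inlet) is emptied first, then the rest comes off r1
```

The complementary "coldest inlet first" rule for adding load is the same greedy loop with the sort order reversed.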

Author(s):  
Tianyi Gao ◽  
James Geer ◽  
Russell Tipton ◽  
Bruce Murray ◽  
Bahgat G. Sammakia ◽  
...  

The heat dissipated by high-performance IT equipment such as servers and switches in data centers is increasing rapidly, which makes thermal management even more challenging. IT equipment is typically designed to operate at a rack inlet air temperature between 10 °C and 35 °C. The newest published environmental standards for operating IT equipment, proposed by ASHRAE, specify a long-term recommended dry-bulb IT air inlet temperature range of 18 °C to 27 °C. For short-term operation, the largest allowable inlet temperature range is 5 °C to 45 °C. Failure to maintain these specifications leads to significantly detrimental impacts on the performance and reliability of these electronic devices. Thus, understanding the cooling system is of paramount importance for the design and operation of data centers. In this paper, a hybrid cooling system is numerically modeled and investigated. The numerical modeling is conducted using a commercial computational fluid dynamics (CFD) code. In the hybrid cooling strategy, in-row cooling units are mounted between the server racks to assist the raised-floor air cooling. The effects of several input variables, including rack heat load and heat density, rack airflow rate, in-row cooling unit coolant flow rate and temperature, in-row coil effectiveness, centralized cooling unit supply airflow rate, non-uniformity in rack heat load, and raised-floor height, are studied parametrically. Their detailed effects on the rack inlet air temperatures and on in-row cooler performance are presented. The modeling results and corresponding analyses are used to develop general installation and operation guidance for the in-row cooler strategy in a data center.


Energies ◽  
2020 ◽  
Vol 13 (18) ◽  
pp. 4595
Author(s):  
Naoki Futawatari ◽  
Yosuke Udagawa ◽  
Taro Mori ◽  
Hirofumi Hayama

In data centers, heating, ventilation, and air-conditioning (HVAC) accounts for 30–40% of total energy consumption, and 26% of that portion is attributed to fan power, whose ventilation efficiency should therefore be improved. Computational fluid dynamics (CFD) is used as an alternative to physical experiments. In this study, “parameter tuning,” which aims to improve the prediction accuracy of CFD simulation, is implemented using the design-of-experiments method, and the tuned CFD model is then applied to improving the thermal environment. As a result of the parameter tuning, the difference between experimental measurements and simulation results for the average inlet temperature of information-technology equipment (ITE) installed in the ventilation room of a test data center was at most 0.2 °C. After tuning, the CFD model was used to verify the effect of advanced insulation measures such as raised-floor fixed panels and to show the possibility of reducing fan power by 26% while keeping the recirculation ratio constant. Improving heat-insulation performance differs from the conventional approach to improving ventilation efficiency (namely, segregating cold and hot airflow), and it is a possible solution for dealing with the excessive heat generated in data centers.


Author(s):  
Huijing Jiang ◽  
Xinwei Deng ◽  
Vanessa Lopez ◽  
Hendrik Hamann

Energy consumption of data centers has increased dramatically due to massive computing demands from every sector of the economy. Hence, data center energy management has become very important for operating data centers within environmental standards while achieving low energy cost. To advance the understanding of thermal management in data centers, relevant environmental information such as temperature, humidity, and air quality is gathered through a network of real-time sensors or simulated via sophisticated physical models (e.g., computational fluid dynamics models). However, sensor readings of environmental parameters are collected only at sparse locations and thus cannot provide a detailed map of the temperature distribution for the entire data center. While the physics models yield high-resolution temperature maps, their computational complexity often makes it infeasible to run them in real time, which is ideally required for optimal data center operation and management. In this work, we propose a novel statistical modeling approach that updates physical model outputs in real time and automatically schedules re-computation of the physical model. The proposed method dynamically corrects the discrepancy between a steady-state output of the physical model and real-time thermal sensor data. We show that the proposed method can provide valuable information for data center energy management, such as real-time high-resolution thermal maps. Moreover, it can efficiently detect systematic changes in a data center's thermal environment and automatically schedule the physical model to be re-executed whenever significant changes are detected.
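The core idea of correcting a steady-state model with sparse sensors, and triggering a model re-run when the discrepancy is no longer a simple offset, can be illustrated with a toy sketch. This is not the authors' statistical model; the uniform-bias correction, the spread-based trigger, and the 2 °C threshold are all simplifying assumptions made for the example.

```python
# Toy sketch: adjust a steady-state temperature map by the mean sensor/model
# residual, and flag a re-run when residuals vary too much to be a mere offset.
import statistics

def corrected_map(model_temps, sensor_readings, rerun_threshold_c=2.0):
    """model_temps: {location: modeled temp, °C}; sensor_readings: {location:
    measured temp, °C} at the sparse sensor locations. Returns (adjusted map,
    rerun flag)."""
    residuals = [sensor_readings[loc] - model_temps[loc]
                 for loc in sensor_readings if loc in model_temps]
    bias = statistics.mean(residuals) if residuals else 0.0
    # Apply the uniform bias correction everywhere, including unsensed spots.
    adjusted = {loc: t + bias for loc, t in model_temps.items()}
    # A large spread in residuals suggests a systematic change in the thermal
    # environment rather than a uniform drift, so schedule a model re-run.
    rerun = len(residuals) > 1 and statistics.pstdev(residuals) > rerun_threshold_c
    return adjusted, rerun

model = {"a": 20.0, "b": 25.0, "c": 30.0}   # model output, incl. unsensed "c"
sensors = {"a": 21.0, "b": 26.0}            # sparse real-time readings
adjusted, rerun = corrected_map(model, sensors)
```

Here both sensors read 1 °C above the model, so the whole map shifts up by 1 °C and no re-run is flagged.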


Author(s):  
Xuanhang Simon Zhang ◽  
James W. VanGilder

A software tool was developed to predict the transient cooling performance of data centers and to explore various design and management alternatives in real time. Cooling performance can be affected by factors such as room architecture, rack population and layout, the connection of cooler fans and chilled-water pumps to UPSs, the size of the chilled-water storage tank, etc. The available transient cooling runtime is mainly dictated by the system's stored cooling capacity and the total load in the data center. This paper discusses the transient response of data centers to different design and failure scenarios and details a comprehensive and efficient approach for simulating this performance.


Author(s):  
Madhusudan Iyengar ◽  
Roger Schmidt ◽  
Arun Sharma ◽  
Gerard McVicker ◽  
Saurabh Shrivastava ◽  
...  

Data center equipment almost always represents a high capital investment for the customer and is often operated without any downtime. Datacom equipment is typically designed to operate at a rack air inlet temperature between 10 °C and 35 °C, and a violation of this specification can diminish electronic device reliability and even lead to failure in the field. Thus, it is of paramount importance, from a reliability perspective, to sufficiently understand these systems. A representative non-raised-floor data center system was numerically modeled, and the data generated from a parametric study were analyzed. The model constitutes a half-symmetry section of a 40-rack data center arranged in a cold-aisle/hot-aisle fashion. The effects of several input variables, namely rack heat load, rack flow rate, rack temperature rise, diffuser flow rate, diffuser location, diffuser height, diffuser pitch, ceiling height, hot exhaust air return vent location, and non-uniformity in rack heat load, were studied. Temperature data were collected at several locations at the inlets to the racks. Statistical analysis was carried out to describe trends in the data.


Author(s):  
Veerendra Mulay ◽  
Saket Karajgikar ◽  
Dereje Agonafer ◽  
Roger Schmidt ◽  
Madhusudan Iyengar

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, a raised-floor plenum with computer room air conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended in the cooling infrastructure than in the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that the heat load per product footprint doubled for storage servers during 2000–2004; over the same period, the heat load per product footprint for compute servers tripled. Among the systems currently available and being shipped, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to those over air cooling of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in these situations. A parametric study of such a solution is presented in this paper. A representative data center with 40 racks is modeled using a commercially available CFD code. The variation in rack inlet temperature due to tile openings and underfloor plenum depths is reported.


Author(s):  
Shrishail Guggari ◽  
Dereje Agonafer ◽  
Christian Belady ◽  
Lennart Stahl

Today’s data centers are designed to handle heat densities of 1000 W/m² at the room level. Trends indicate that these heat densities will exceed 3000 W/m² in the near future. As a result, cooling of data centers has emerged as an area of increasing importance in electronics thermal management. With these high heat loads, data center layout and design cannot rely on intuitive design of air distribution and require analytical tools to provide the necessary insight into the problem. These tools can also be used to optimize the layout of the room to improve energy efficiency in the data center. In this paper, an underfloor analysis is first performed to find an optimized layout based on the flow distribution through the perforated tiles; a complete computational fluid dynamics (CFD) model of the data center facility is then built to check for the desired cooling and airflow distribution throughout the room. A robust methodology is proposed that enables fast, easy, and efficient modeling and analysis of data center designs. Results are presented to provide guidance on the layout and design of data centers. The resulting design approach is very simple and well suited to the energy-efficient design of complex data centers and server farms.

