Rack-Level Study of Hybrid Cooled Servers Using Warm Water Cooling With Variable Pumping for Centralized Coolant System

Author(s):  
Manasa Sahini ◽  
Chinmay Kshirsagar ◽  
Patrick McGinn ◽  
Dereje Agonafer

As global demand for data centers grows, so do the size of and the load placed on these facilities, leading to constraints on the power and space available to the operator. Cooling accounts for a major share of data center energy usage, and liquid cooling technology has emerged as a viable means of optimizing the energy consumed per unit of performance. In this rack-level data center evaluation, 2OU (Open U) hybrid (liquid + air) cooled web servers are tested to observe the effects of warm water cooling on server component temperatures, IT power, and cooling power. The study discusses the importance of variable speed pumping in a centralized coolant system configuration. The cooling setup includes a mini rack capable of housing up to eleven hybrid cooled web servers and two heat exchangers that reject the heat dissipated by the servers to the environment (the test rig data center room). The centralized configuration has two redundant pumps placed in series with a heat exchanger at the rack. Each server is equipped with two passive (i.e., no active pump) cold plates that cool the CPUs, while the rest of the components are air cooled. A synthetic stress load is generated on each server using stress-testing tools. The pumps are powered separately using an external power supply, and the pump speed is proportional to the voltage across the armature [1]. Pump rpm is recorded for input voltages ranging from 11 V to 17 V. The servers are tested at elevated inlet temperatures ranging from 25°C to 45°C, which fall within the ASHRAE liquid cooling envelope W4 [2]. Variable pumping is achieved by applying different input voltages at the respective inlet temperatures.
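
A minimal sketch of the resulting test matrix, assuming the linear voltage-to-speed relation cited above; the proportionality constant is a hypothetical placeholder, not a measured value from the study:

```python
# Minimal sketch of the test matrix described above. K_RPM_PER_VOLT is a
# hypothetical proportionality constant, not a measured value from the study.
K_RPM_PER_VOLT = 280.0  # assumed rpm per volt of armature voltage

def pump_rpm(voltage_v: float) -> float:
    """Pump speed modeled as proportional to armature voltage [1]."""
    return K_RPM_PER_VOLT * voltage_v

# Sweep the ranges reported above: 11-17 V input voltage at coolant inlet
# temperatures of 25-45 C (within the ASHRAE W4 liquid cooling envelope).
for inlet_c in range(25, 50, 5):
    for voltage_v in range(11, 18, 2):
        print(f"inlet {inlet_c} C, {voltage_v} V -> ~{pump_rpm(voltage_v):,.0f} rpm")
```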

Author(s):  
Uschas Chowdhury ◽  
Manasa Sahini ◽  
Ashwin Siddarth ◽  
Dereje Agonafer ◽  
Steve Branton

Modern data centers run at high power densities, and their IT equipment and cooling systems account for almost 2 percent (70 billion kilowatt-hours) of total US electricity consumption. Although data centers are designed to perform efficiently, cooling high-density components remains a challenge, so alternative methods of improving cooling efficiency have become the main drive to reduce cooling cost. Because liquids have higher specific heat capacity, density, and thermal conductivity than air, hybrid cooling can bring the advantages of liquid cooling to the high-heat-generating components of traditional air-cooled servers. In this experiment, a 1U server is equipped with cold plates to cool the CPUs, while the rest of the components are cooled by fans. Predictive fan and pump failure analyses are performed, which also helps explore options for redundancy and reduce cooling cost by improving cooling efficiency. Redundancy requires knowledge of planned and unplanned system failures. Since the main heat-generating components are liquid cooled, warm water cooling can be employed to observe the effects of raised inlet conditions in a hybrid cooled server under failure scenarios. ASHRAE liquid cooling class W4 is chosen for the experiment, covering coolant inlet temperatures from 25°C to 45°C. The experiments are conducted separately for the pump and fan failure scenarios. Computational loads of idle, 10%, 30%, 50%, 70%, and 98% are applied while only one pump is powered, and the miniature dry cooler fans are controlled externally to maintain a constant coolant inlet temperature. Since the remaining components, such as the DIMMs and PCH, are air cooled, maximum memory utilization is applied while the number of fans is reduced step by step in the fan failure scenario. Component temperatures and power consumption are recorded in each case for performance analysis.
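
A hedged sketch of the experimental sweep just described; the run_case helper is a hypothetical stand-in for the bench procedure, and the fan counts are assumed rather than taken from the experiment:

```python
# Hedged sketch of the failure-scenario sweep described above; run_case is
# a hypothetical stand-in for the bench procedure, and the fan counts are
# assumed rather than taken from the experiment.
from itertools import product

CPU_LOADS_PCT = [0, 10, 30, 50, 70, 98]  # idle represented as 0%
FAN_COUNTS = [4, 3, 2, 1]                # fans progressively removed (assumed)

def run_case(scenario: str, load_pct: int, fans: int) -> dict:
    """One test case: apply the load, hold the coolant inlet temperature
    constant via the external dry-cooler fan control, then record component
    temperatures and power consumption."""
    return {"scenario": scenario, "load_pct": load_pct, "fans": fans}

results = []
# Pump failure: only one pump powered, all fans active.
for load in CPU_LOADS_PCT:
    results.append(run_case("pump_failure", load, fans=max(FAN_COUNTS)))
# Fan failure: maximum memory utilization, fans reduced step by step.
for load, fans in product(CPU_LOADS_PCT, FAN_COUNTS):
    results.append(run_case("fan_failure", load, fans))
```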


Author(s):  
H. E. Khalifa ◽  
D. W. Demetriou

The work presented in this paper describes a simplified thermodynamic model that can be used for exploring optimization possibilities in air-cooled data centers. The model has been used to identify optimal, energy-efficient designs, operating scenarios, and operating parameters such as flow rates and air supply temperatures. The results of this analysis highlight the important features that need to be considered when optimizing the operation of air-cooled data centers, especially the trade-off between low air supply temperature and increased air flow rate. The model was shown to be especially valuable in defining the optimal operating strategies for enclosed aisle configurations with fixed and variable server flows, and in elucidating the deleterious effect of temperature nonuniformity at the inlet of the racks on the data center cooling infrastructure power consumption. The analysis shows a potential for as much as an ∼58% savings in cooling infrastructure energy consumption by utilizing an optimized enclosed aisle configuration with bypass recirculation, instead of a traditional enclosed aisle, where all the data center exhaust is forced to flow through the computer room air conditioners. The analysis of open-aisle data centers shows that as the temperature at the inlet of the racks becomes more nonuniform, optimal operation tends toward lower recirculation and higher power consumption; again stressing the importance of providing as uniform a temperature to the racks as possible. It is also revealed that servers with a modest temperature rise (∼10°C) have a wider latitude for cooling infrastructure optimization than those with a high temperature rise (≥20°C), which tend to consume less cooling power when the aisles are enclosed.
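
The core trade-off can be made concrete with a toy energy balance; a minimal sketch, assuming a fan cube law and an illustrative chiller penalty, with every coefficient a placeholder rather than a value from the paper:

```python
# Toy model of the trade-off between low air supply temperature and
# increased air flow rate; all coefficients are illustrative assumptions.
RHO_CP = 1.15 * 1005.0   # air density * specific heat (J/m^3-K)
Q_IT = 100_000.0         # IT heat load to be removed (W)
T_OUT_MAX_C = 38.0       # assumed allowable server exhaust temperature
FAN_COEFF = 40.0         # fan power = FAN_COEFF * flow^3 (fan cube law)
CHILLER_COEFF = 2_000.0  # assumed chiller penalty per K of supply below 30 C

def cooling_power(supply_temp_c: float) -> float:
    """Lower supply temperature allows a larger air temperature rise and
    hence less flow (and fan power), but costs more chiller power."""
    rise_c = T_OUT_MAX_C - supply_temp_c   # allowed air temperature rise
    flow_m3s = Q_IT / (RHO_CP * rise_c)    # energy balance: Q = rho*cp*V*dT
    fan_w = FAN_COEFF * flow_m3s ** 3      # fan affinity law
    chiller_w = CHILLER_COEFF * max(0.0, 30.0 - supply_temp_c)
    return fan_w + chiller_w

# Sweep supply temperatures to locate the minimum total cooling power.
best_c = min(range(15, 31), key=cooling_power)
print(f"optimal supply temperature ~{best_c} C, "
      f"cooling power ~{cooling_power(best_c):.0f} W")
```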


Author(s):  
Rongliang Zhou ◽  
Cullen Bash ◽  
Zhikui Wang ◽  
Alan McReynolds ◽  
Thomas Christian ◽  
...  

Data centers are large computing facilities that can house tens of thousands of computer servers, storage, and networking devices. They can consume megawatts of power and, as a result, reject megawatts of heat. For more than a decade, researchers have been investigating methods to improve the efficiency with which these facilities are cooled. One of the key challenges in maintaining highly efficient cooling is providing on-demand cooling resources to each server rack, which may vary with time and with rack location within the larger data center. In common practice today, chilled water or refrigerant cooled computer room air conditioning (CRAC) units are used to reject the waste heat outside the data center, and they also work together with the fans in the IT equipment to circulate air within the data center for heat transport. In a raised floor data center, the cool air exiting the multiple CRAC units enters the underfloor plenum before it is distributed through the vent tiles in the cold aisles to the IT equipment. The vent tiles usually have fixed openings and are not adapted to accommodate flow demand that can vary from cold aisle to cold aisle or rack to rack. In this configuration, CRAC units have the extra responsibility of cooling resource distribution as well as provisioning. A CRAC unit, however, does not have the fine control granularity to adjust air delivery to individual racks, since it normally affects a larger thermal zone consisting of a multiplicity of racks arranged into rows. To better match cool air demand on a per-cold-aisle or per-rack basis, floor-mounted adaptive vent tiles (AVTs) can take over air delivery adjustment from the CRAC units. In this arrangement, each adaptive vent tile can be remotely commanded from fully open to fully closed for finer local air flow regulation. The optimal configuration for a multitude of AVTs in a data center, however, can be far from intuitive because of the air flow complexity. To unleash the full potential of the AVTs for improved air flow distribution and hence higher cooling efficiency, we propose a two-step approach that combines steady-state and dynamic optimization to optimize cooling resource provisioning and distribution within raised-floor air cooled data centers with rigid or partial containment. We first perform a model-based steady-state optimization of the whole data center air flow distribution. Within each cold aisle, all AVTs are configured to a uniform opening setting, although the AVT opening may vary from cold aisle to cold aisle. We then use decentralized dynamic controllers to optimize the settings of each CRAC unit such that the IT equipment thermal requirement is satisfied with the least cooling power. This two-step optimization approach simplifies the large scale dynamic control problem, and its effectiveness in improving cooling efficiency is demonstrated through experiments in a research data center.
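
A hedged skeleton of that two-step structure; the flow model, CRAC object, and sensor inputs are hypothetical placeholders, and the proportional update is an illustrative simplification of the decentralized controllers, not the paper's control law:

```python
# Hedged skeleton of the two-step approach; flow_model, crac, and the
# sensor inputs are hypothetical placeholders.

def optimize_avt_openings(flow_model, cold_aisles):
    """Step 1: model-based steady-state optimization of AVT openings.
    One uniform opening per cold aisle; openings may differ between aisles."""
    openings = {}
    for aisle in cold_aisles:
        # Choose the opening (0.0 = fully closed, 1.0 = fully open) that
        # best matches the aisle's predicted flow demand in the model.
        openings[aisle] = max((step / 10 for step in range(11)),
                              key=lambda o: flow_model.score(aisle, o))
    return openings

def crac_control_step(crac, rack_inlet_temps_c, setpoint_c=27.0):
    """Step 2: one decentralized dynamic-control update per CRAC unit.
    Trim blower speed so the warmest rack inlet in this CRAC's thermal
    zone just meets its requirement with the least cooling power."""
    error_k = max(rack_inlet_temps_c) - setpoint_c
    crac.blower_speed += 0.05 * error_k  # simple proportional action
```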


Author(s):  
Jason A. Matteson ◽  
Aparna Vallury ◽  
Billy Medlin

Today’s server designs continue to package more electronics, consuming higher levels of power, into smaller and smaller spaces, which increases the demand on the cooling subsystem within a server. These trends continue to drive total cooling power and airflow demands up, resulting in increased acoustics. Due to the imposed high cooling requirements, data centers are frequently limited by the amount of flow that raised floor environments are capable of providing. Server cooling demands are outpacing data center airflow capabilities, resulting in energy inefficient scenarios as well as increasingly high acoustic levels. Given these concerns, there is a dire need for new thermal management techniques. This paper describes a new technique that offers more advanced yet simplified user controls, giving the end user the ability to minimize the acoustic signature and the cooling energy spent while maximizing server performance.


Author(s):  
Isaac Rose ◽  
Aaron P. Wemhoff ◽  
Amy S. Fleischer

Data centers consume an extraordinary amount of electricity, and the rate of consumption is increasing at a rapid pace. Thus, energy efficiency in data center design is of substantial interest, since it can have a significant impact on operating costs. The server cooling infrastructure is one area that is ripe for design innovation. Various designs have been considered for air-cooled data centers, and there is growing interest in liquid-cooled server designs. One potential liquid-cooled solution, which reduces the cost of cooling to less than 5% of the information technology (IT) energy use, is a chiller-less or warm water-cooled system, which removes the chiller from the design and lets the cooling water supply vary with changes in the outdoor ambient conditions. While this design has been proven to work effectively in some locations, environmental extremes prevent its more widespread implementation. In this paper, the design and analysis of a cold water storage system are shown to extend the applicability of chiller-less designs to a wider variety of environmental conditions. This can lead to both energy and economic savings for a wide variety of data center installations. A numerical model of a water storage system is developed, validated, and used to analyze the impact of a water storage tank system in a chiller-less data center design featuring outdoor wet cooling. The results show that during times of high wet bulb operating conditions, a water storage tank can be an effective method to significantly reduce chip operating temperatures for warm water-cooled systems, lowering operating temperatures by 5–7 °C during the hottest part of the day. The overall system performance was evaluated using both an exergy analysis and a modified power usage effectiveness (PUE) metric defined for the water storage system. This unique situation also necessitates the development of a new exergy definition in order to properly capture the physics involved. The impacts of tank size, tank aspect ratio, fill percentage, and charging/discharging time on both the chip temperature and the modified PUE are evaluated. It is determined that tank charging time must be carefully matched to environmental conditions in order to optimize impact. Interestingly, the water being stored is initially above ambient, but the overall system performance improves with lower water temperatures; heat losses to ambient are therefore found to be beneficial to the overall system performance. The results of this analysis demonstrate that in application, data center operators will see a clear performance benefit if water storage systems are used in conjunction with warm water cooling. The approach can be extended to data center failure scenarios and could also lead to downsizing of equipment and a clear economic benefit.
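
A lumped, single-node sketch of the tank energy balance and a modified PUE of the general form described above; the tank parameters and the exact metric definition here are illustrative assumptions, not the paper's formulation:

```python
# Lumped (single-node) sketch of the storage-tank energy balance and an
# illustrative modified PUE; parameters and metric form are assumptions.
CP_WATER = 4186.0  # specific heat of water (J/kg-K)

def tank_step(t_tank_c, t_amb_c, m_dot_kgs, t_in_c, mass_kg, ua_w_k, dt_s):
    """Advance the tank temperature one time step: charging/discharging
    flow plus heat exchange with ambient (losses the paper found to be
    beneficial to overall performance)."""
    q_flow_w = m_dot_kgs * CP_WATER * (t_in_c - t_tank_c)
    q_amb_w = ua_w_k * (t_amb_c - t_tank_c)
    return t_tank_c + (q_flow_w + q_amb_w) * dt_s / (mass_kg * CP_WATER)

def modified_pue(p_it_w, p_cooling_w, p_tank_pump_w):
    """Illustrative modified PUE: facility overhead, including the tank's
    pumping power, relative to IT power."""
    return (p_it_w + p_cooling_w + p_tank_pump_w) / p_it_w
```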


2016 ◽  
Vol 138 (1) ◽  
Author(s):  
Dustin W. Demetriou ◽  
Vinod Kamath ◽  
Howard Mahaney

The generation-to-generation information technology (IT) performance and density demands continue to drive innovation in data center cooling technologies. For many applications, the ability to efficiently deliver cooling via traditional chilled air cooling approaches has become inadequate. Water cooling has been used in data centers for more than 50 years to improve heat dissipation, boost performance, and increase efficiency. While water cooling can undoubtedly have a higher initial capital cost, water cooling can be very cost effective when looking at the true life cycle cost of a water-cooled data center. This study aims at addressing how one should evaluate the true total cost of ownership (TCO) for water-cooled data centers by considering the combined capital and operational cost for both the IT systems and the data center facility. It compares several metrics, including return-on-investment for three cooling technologies: traditional air cooling, rack-level cooling using rear door heat exchangers, and direct water cooling (DWC) via cold plates. The results highlight several important variables, namely, IT power, data center location, site electric utility cost, and construction costs and how each of these influences the TCO of water cooling. The study further looks at implementing water cooling as part of a new data center construction project versus a retrofit or upgrade into an existing data center facility.
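
A minimal sketch of a lifecycle TCO comparison of the kind described above; every figure is an illustrative placeholder, not data from the study:

```python
# Minimal sketch of a lifecycle TCO comparison; all figures are
# illustrative placeholders, not data from the study.
def tco(capex_usd, annual_opex_usd, annual_cooling_kwh,
        years=5, utility_usd_per_kwh=0.10):
    """Capital cost plus operating cost, with cooling energy priced at
    the site utility rate, over the evaluation period."""
    energy_usd = annual_cooling_kwh * utility_usd_per_kwh * years
    return capex_usd + annual_opex_usd * years + energy_usd

technologies = {  # capex, other annual opex, annual cooling energy (assumed)
    "traditional air cooling":  tco(1.0e6, 50_000, 2.0e6),
    "rear door heat exchanger": tco(1.3e6, 45_000, 1.2e6),
    "direct water cooling":     tco(1.6e6, 40_000, 0.6e6),
}
for name, cost_usd in technologies.items():
    print(f"{name}: ${cost_usd:,.0f} over 5 years")
```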


Author(s):  
Veerendra Mulay ◽  
Dereje Agonafer ◽  
Roger Schmidt

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, a raised floor plenum with computer room air conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of the data center, however, may not address situations in which the cooling infrastructure consumes more energy than the thermal load of the data center itself. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute server equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004; over the same period, heat load per product footprint for compute servers tripled. Among the systems currently available and shipping, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to the limits of air cooling of microprocessors. Thermal management of such dense data center clusters using liquid cooling is presented.

