Minimizing Thermal Stress for Data Center Servers through Thermal-Aware Relocation

2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Muhammad Tayyab Chaudhry ◽  
T. C. Ling ◽  
S. A. Hussain ◽  
Atif Manzoor

A rise in inlet air temperature can lower the rate of heat dissipation from air-cooled computing servers, subjecting them to thermal stress. Poorly cooled active servers then conduct heat to neighboring servers, giving rise to hotspot regions of thermal stress inside the data center. The physical hardware of these servers may consequently fail, causing performance loss, monetary loss, and higher energy consumption by the cooling mechanism. To minimize these situations, this paper profiles inlet temperature sensitivity (ITS) and defines the optimal location for each server to minimize the chances of creating a thermal hotspot and thermal stress. Based on this novel ITS analysis, a thermal-state monitoring and server relocation algorithm for data centers is proposed. The contribution of this paper is bringing the peak outlet temperatures of the relocated servers more than 5 times closer to the average outlet temperature, lowering the average peak outlet temperature by 3.5%, and minimizing thermal stress.
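The abstract does not give the relocation algorithm itself; as a rough illustration of the idea, a greedy pairing of the most inlet-sensitive servers with the coolest available slots might look like the sketch below. All server names, ITS slopes, and temperatures are invented for demonstration.

```python
# Hypothetical sketch of ITS-based relocation: servers whose outlet
# temperature is most sensitive to inlet temperature (highest ITS slope)
# are assigned the coolest available inlets.

def relocate_by_its(servers, slots):
    """servers: list of (name, its_slope); slots: list of (slot_id, inlet_temp_c).
    Pairs the most inlet-sensitive servers with the coolest slots to
    suppress peak outlet temperatures."""
    by_sensitivity = sorted(servers, key=lambda s: s[1], reverse=True)
    by_coolness = sorted(slots, key=lambda s: s[1])
    return {name: slot for (name, _), (slot, _) in zip(by_sensitivity, by_coolness)}

placement = relocate_by_its(
    servers=[("srv-a", 0.9), ("srv-b", 0.4), ("srv-c", 0.7)],
    slots=[("rack1-u10", 24.0), ("rack2-u04", 19.5), ("rack3-u21", 22.0)],
)
# the most sensitive server, srv-a, lands at the coolest inlet, rack2-u04
```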

Author(s):  
Uschas Chowdhury ◽  
Walter Hendrix ◽  
Thomas Craft ◽  
Willis James ◽  
Ankit Sutaria ◽  
...  

Abstract In a data center, electronic equipment such as servers and switches dissipates heat, and the corresponding cooling systems typically account for 25–35% of total energy consumption. The heat load continues to increase with the growing push for miniaturization and convergence. In 2014, data centers in the U.S. consumed an estimated 70 billion kWh, representing about 1.8% of total U.S. electricity consumption. Based on current trend estimates, U.S. data centers are projected to consume approximately 73 billion kWh in 2020 [1]. Many studies and strategies have been adopted to minimize energy cost. The recommended dry-bulb temperature for long-term operation and reliability with air cooling is between 18°C and 27°C, and the largest allowable inlet temperature range is between 5°C and 45°C, with the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) enabling much broader allowable zones [2]. A proper understanding of the cooling system is especially important for thermal management of IT equipment with high heat loads, such as 1U or 2U multi-core high-end servers and blade servers, which provide more computing per watt. Many problems may occur, including high inlet temperatures due to the mixing of hot air with cold air, local hot spots, lower system reliability, increased failures, and downtime. Among the many approaches to managing high-density racks, in-row coolers are placed between racks to provide cold air and minimize local hot spots. This paper describes a computational study of in-row coolers for different rack power configurations, with and without aisle containment. Both the power and the number of racks are varied to study the effect of raised inlet temperature on the IT equipment in a Computational Fluid Dynamics (CFD) model developed in 6SigmaRoom with the help of built-in library items.
A comparative analysis is also performed for a typical small-sized non-raised-floor facility to investigate the efficacy and limitations of in-row coolers in thermal management of IT equipment under variations in rack heat load and containment. Several other aspects, such as a parametric study of variable opening areas of the duct between racks and in-row coolers, variation of operating flow rate, and failure scenarios, are also studied to find proper flow distribution and uniformity of outlet temperature, and to predict better performance, energy savings, and reliability. The results are presented as general guidance for flexible and quick installation and safe operation of in-row coolers to improve thermal efficiency.


Author(s):  
Ankit Somani ◽  
Yogendra K. Joshi

Data centers can consume 25 to 50 times more electric power than a standard office space of the same footprint. In this paper, a simplified computational fluid dynamics/heat transfer (CFD/HT) model for a unit cell of a data center with a hot-aisle/cold-aisle (HACA) layout is simulated. Inefficiencies arising from the mixing of hot room air with the cold inlet air, which lead to a loss of cooling potential, are identified. The need for a thermal-aware job-scheduling algorithm that enhances IT productivity while maintaining the facility within server inlet temperature constraints is established. The inherent non-linearity of such an optimization problem is explained. A novel algorithm called Ambient Intelligence based Load Management (AILM) is developed, which counters the above issues and enhances the net data center heat dissipation capacity for a given energy consumption at the facilities end. It provides a scheme to determine how much computing load should be allocated, and where, based on the differential loss in cooling potential per unit increase in server workload. Enhancements of heat dissipation capacity of over 50% are demonstrated numerically for the representative values considered. An approach to incorporating heterogeneity in data centers, for both lower-heat-dissipation and liquid-cooled racks, is established. Finally, different objective functions are studied and an ideal combination of IT objectives and thermal constraints is derived.
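The AILM formulation itself is not reproduced in the abstract; in the same spirit, a greedy allocator can assign each unit of workload to the server whose next unit costs the least cooling potential, with the marginal loss growing with load to mimic the non-linearity the abstract mentions. The loss model and all numbers below are invented for demonstration.

```python
import heapq

def allocate(total_units, base_loss, slope):
    """Assign workload units one at a time to the server whose *next* unit
    incurs the smallest differential loss in cooling potential. Marginal
    loss grows linearly with load: base_loss[s] + slope[s] * current_load.
    This is an illustrative stand-in for the paper's optimization, not AILM."""
    load = {s: 0 for s in base_loss}
    heap = [(base_loss[s], s) for s in base_loss]
    heapq.heapify(heap)
    for _ in range(total_units):
        _, s = heapq.heappop(heap)          # cheapest next unit
        load[s] += 1
        heapq.heappush(heap, (base_loss[s] + slope[s] * load[s], s))
    return load

# s1 starts cheap but its marginal loss rises quickly; s2 absorbs most load
load = allocate(6, base_loss={"s1": 1.0, "s2": 1.2}, slope={"s1": 0.5, "s2": 0.1})
```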


Author(s):  
Amip J. Shah ◽  
Van P. Carey ◽  
Cullen E. Bash ◽  
Chandrakant D. Patel

As heat dissipation in data centers rises by orders of magnitude, inefficiencies such as recirculation will have an increasingly significant impact on the thermal manageability and energy efficiency of the cooling infrastructure. For example, prior work has shown that for simple data centers with a single Computer Room Air-Conditioning (CRAC) unit, an operating strategy that fails to account for inefficiencies in the air space can result in suboptimal performance. To enable system-wide optimality, an exergy-based approach to CRAC control has previously been proposed. However, application of such a strategy in a real data center environment is limited by the assumptions inherent to the single-CRAC derivation. This paper addresses these assumptions by modifying the exergy-based approach to account for the additional interactions encountered in a multi-component environment. It is shown that the modified formulation provides the framework necessary to evaluate performance of multi-component data center thermal management systems under widely different operating circumstances.
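The paper's multi-component exergy formulation is not given in the abstract. For orientation, the standard specific flow exergy of an air stream (ideal gas, constant cp, pressure term neglected) can be computed as below; this is a textbook expression, not the authors' model, and the dead-state choice is an assumption.

```python
import math

CP_AIR = 1.005  # kJ/(kg*K), approximate specific heat of air at room conditions

def stream_exergy(t_k, t0_k, cp=CP_AIR):
    """Specific flow exergy (kJ/kg) of an air stream at temperature t_k
    relative to dead state t0_k, neglecting the pressure term:
        psi = cp * ((T - T0) - T0 * ln(T / T0)).
    Exergy destroyed by recirculation/mixing can be estimated as the
    exergy of the inlet streams minus that of the mixed stream."""
    return cp * ((t_k - t0_k) - t0_k * math.log(t_k / t0_k))

zero = stream_exergy(298.15, 298.15)   # stream at dead state carries no exergy
hot = stream_exergy(318.15, 298.15)    # hot exhaust air
cold = stream_exergy(278.15, 298.15)   # chilled supply air: also positive
```

Because psi is convex in T, mixing a hot and a cold stream always yields less exergy than the streams carried separately, which is why recirculation is penalized in an exergy-based control strategy.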



Author(s):  
Tianyi Gao ◽  
James Geer ◽  
Russell Tipton ◽  
Bruce Murray ◽  
Bahgat G. Sammakia ◽  
...  

The heat dissipated by high-performance IT equipment such as servers and switches in data centers is increasing rapidly, which makes thermal management even more challenging. IT equipment is typically designed to operate at a rack inlet air temperature between 10 °C and 35 °C. The newest published environmental standards for operating IT equipment, proposed by ASHRAE, specify a long-term recommended dry-bulb IT air inlet temperature range of 18 °C to 27 °C. For the short-term specification, the largest allowable inlet temperature range is between 5 °C and 45 °C. Failure to maintain these specifications significantly degrades the performance and reliability of these electronic devices. Thus, understanding the cooling system is of paramount importance for the design and operation of data centers. In this paper, a hybrid cooling system is numerically modeled and investigated. The numerical modeling is conducted using a commercial computational fluid dynamics (CFD) code. The hybrid cooling strategy mounts in-row cooling units between the server racks to assist the raised-floor air cooling. The effects of several input variables, including rack heat load and heat density, rack air flow rate, in-row cooling unit coolant flow rate and temperature, in-row coil effectiveness, centralized cooling unit supply air flow rate, non-uniformity in rack heat load, and raised-floor height, are studied parametrically. Their detailed effects on the rack inlet air temperatures and on in-row cooler performance are presented. The modeling results and corresponding analyses are used to develop general installation and operation guidance for the in-row cooler strategy of a data center.


Author(s):  
Veerendra Mulay ◽  
Saket Karajgikar ◽  
Dereje Agonafer ◽  
Roger Schmidt ◽  
Madhusudan Iyengar

The power trend for server systems continues to grow, making thermal management of data centers a very challenging task. Although various configurations exist, a raised-floor plenum with Computer Room Air Conditioners (CRACs) providing cold air is a popular operating strategy. Air cooling of a data center, however, may not address the situation where more energy is expended on the cooling infrastructure than on the thermal load of the data center. Revised power trend projections by ASHRAE TC 9.9 predict heat loads as high as 5000 W per square foot of compute servers' equipment footprint by the year 2010. These trend charts also indicate that heat load per product footprint doubled for storage servers during 2000–2004; over the same period, heat load per product footprint for compute servers tripled. Among the systems currently available and shipping, many racks exceed 20 kW. Such high heat loads have raised concerns over the limits of air cooling of data centers, similar to those over air cooling of microprocessors. A hybrid cooling strategy that incorporates liquid cooling along with air cooling can be very efficient in these situations. A parametric study of such a solution is presented in this paper. A representative data center with 40 racks is modeled using a commercially available CFD code. The variation in rack inlet temperature due to tile openings and underfloor plenum depths is reported.


2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Dustin W. Demetriou ◽  
H. Ezzat Khalifa

This paper expands on the work of Demetriou and Khalifa (2013, “Thermally Aware, Energy-Based Load Placement in Open-Aisle, Air-Cooled Data Centers,” ASME J. Electron. Packag., 135(3), p. 030906), which investigated practical IT load placement options in open-aisle, air-cooled data centers. That study found that a robust approach was to use real-time temperature measurements at the rack inlets and remove IT load from the servers with the warmest inlet temperatures. By holistically optimizing the data center load placement strategy together with the cooling infrastructure, across a range of data center IT utilization levels, the present study investigated the effect of ambient temperature on data center operation; the consolidation of servers by shutting them off completely; a strategy complementary to those of Demetriou and Khalifa that increases IT load beginning with the servers that have the coldest inlet temperatures; and the development of load placement rules via either static (i.e., during data center benchmarking) or dynamic (using real-time data from the current thermal environment) allocation. In all of these case studies, a key finding of the holistic optimization was that reducing the CRAH airflow rate yields significant savings in the cooling infrastructure's power consumption. In many cases, these savings exceed those from supplying higher-temperature chilled water from the refrigeration units.
Therefore, the path to realizing the industry's goal of higher IT equipment inlet temperatures for improved energy efficiency should combine a reduction in airflow rate with increased supply air temperatures, rather than relying solely on higher CRAH supply air temperatures.
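The warmest-inlet-first shedding heuristic the study identifies as robust can be sketched in a few lines; the data structures and numbers below are illustrative, not the authors' implementation.

```python
def shed_load(inlet_temps, load, units_to_remove):
    """Remove workload units from the servers with the warmest measured
    inlet temperatures first. inlet_temps and load are dicts keyed by
    server; returns the trimmed load map."""
    load = dict(load)  # do not mutate the caller's dict
    for srv in sorted(inlet_temps, key=inlet_temps.get, reverse=True):
        take = min(load[srv], units_to_remove)
        load[srv] -= take
        units_to_remove -= take
        if units_to_remove == 0:
            break
    return load

# server "a" has the warmest inlet, so it is emptied first
trimmed = shed_load(
    inlet_temps={"a": 27.0, "b": 22.0, "c": 25.0},
    load={"a": 4, "b": 4, "c": 4},
    units_to_remove=6,
)
```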


2016 ◽  
Vol 138 (1) ◽  
Author(s):  
Dustin W. Demetriou ◽  
Vinod Kamath ◽  
Howard Mahaney

The generation-to-generation information technology (IT) performance and density demands continue to drive innovation in data center cooling technologies. For many applications, the ability to efficiently deliver cooling via traditional chilled air cooling approaches has become inadequate. Water cooling has been used in data centers for more than 50 years to improve heat dissipation, boost performance, and increase efficiency. While water cooling can undoubtedly have a higher initial capital cost, water cooling can be very cost effective when looking at the true life cycle cost of a water-cooled data center. This study aims at addressing how one should evaluate the true total cost of ownership (TCO) for water-cooled data centers by considering the combined capital and operational cost for both the IT systems and the data center facility. It compares several metrics, including return-on-investment for three cooling technologies: traditional air cooling, rack-level cooling using rear door heat exchangers, and direct water cooling (DWC) via cold plates. The results highlight several important variables, namely, IT power, data center location, site electric utility cost, and construction costs and how each of these influences the TCO of water cooling. The study further looks at implementing water cooling as part of a new data center construction project versus a retrofit or upgrade into an existing data center facility.
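The abstract's point that higher-CapEx water cooling can win on lifecycle cost is easy to make concrete with a toy TCO calculation. The formula below is a generic simplification (capital cost plus discounted annual operating cost), and every dollar figure is invented; the paper's actual accounting covers combined IT and facility costs in far more detail.

```python
def total_cost_of_ownership(capex, annual_opex, years, discount_rate=0.0):
    """Lifecycle TCO as capital cost plus (optionally discounted) annual
    operating cost over the facility's horizon. A simplified stand-in for
    the study's combined IT + facility accounting."""
    return capex + sum(annual_opex / (1 + discount_rate) ** y
                       for y in range(1, years + 1))

# Illustrative only: water cooling costs more up front but less to run
air = total_cost_of_ownership(capex=1.0e6, annual_opex=4.0e5, years=10)
dwc = total_cost_of_ownership(capex=1.4e6, annual_opex=2.5e5, years=10)
# with these made-up numbers, the water-cooled option wins over 10 years
```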


2018 ◽  
Vol 29 (4) ◽  
pp. 678-703
Author(s):  
Louma Ahmad Chaddad ◽  
Ali Chehab ◽  
Imad Elhajj ◽  
Ayman Kayssi

Purpose The purpose of this paper is to present an approach to reducing energy consumption in data centers and, subsequently, the electricity bills and carbon dioxide footprint that result from their use. Design/methodology/approach The authors present a mathematical model of the energy dissipation optimization problem. They formulate the server selection problem and the supply air temperature analytically as a non-linear program and propose an algorithm to solve it dynamically. Findings A simulation study on SimWare, using real workload traces, shows considerable savings for different data center sizes and utilization rates compared with three other classic algorithms. The results show that the proposed algorithm handles the energy-performance trade-off efficiently, provides significant energy savings, and maintains a relatively homogeneous and stable thermal state across the rack units in the data center. Originality/value The proposed algorithm ensures energy provisioning, performance optimization over existing state-of-the-art heuristics, and on-demand workload allocation.
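The paper's non-linear program is not reproduced in the abstract. As a minimal sketch of one of its decision variables, the supply air temperature choice can be shown as a brute-force search over candidates, trading IT heat against cooling efficiency; the COP model and all values here are assumptions, not the authors' formulation.

```python
def best_supply_temperature(candidates_c, it_power_kw, cop_at):
    """Pick the supply air temperature minimizing cooling power,
    modeled as IT heat / COP(T_supply). A discrete brute-force stand-in
    for the paper's non-linear program; cop_at is an assumed model."""
    return min(candidates_c, key=lambda t: it_power_kw / cop_at(t))

# Illustrative COP model: chiller efficiency improves with warmer supply air,
# so under this monotone model the warmest feasible candidate wins.
t_supply = best_supply_temperature(
    candidates_c=[16, 18, 20, 22, 24],
    it_power_kw=500.0,
    cop_at=lambda t_c: 3.0 + 0.15 * (t_c - 16),
)
```

In the real problem the feasible candidates would be capped by the server inlet temperature constraints, which is where the non-linear coupling to server selection enters.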


Energies ◽  
2020 ◽  
Vol 13 (18) ◽  
pp. 4595
Author(s):  
Naoki Futawatari ◽  
Yosuke Udagawa ◽  
Taro Mori ◽  
Hirofumi Hayama

In data centers, heating, ventilation, and air-conditioning (HVAC) accounts for 30–40% of total energy consumption, and of that portion, 26% is attributed to fan power, whose ventilation efficiency should therefore be improved. Computational fluid dynamics (CFD) is used as an alternative to physical experiments. In this study, "parameter tuning", which aims to improve the prediction accuracy of CFD simulation, is implemented using the method known as "design of experiments". The tuned CFD model is then used to improve the thermal environment. After parameter tuning, the difference between experimentally measured and simulated average inlet temperatures of information-technology equipment (ITE) installed in the ventilation room of a test data center was at most 0.2 °C. The tuned CFD model was then used to verify the effect of advanced insulation, such as raised-floor fixed panels, and to show the possibility of reducing fan power by 26% while keeping the recirculation ratio constant. Improving heat-insulation performance differs from the conventional approach to improving ventilation efficiency (namely, segregating cold and hot airflow) and is a possible solution for dealing with the excessive heat generated in data centers.
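The abstract does not detail the design-of-experiments procedure. As a crude stand-in, tuning can be pictured as a full-factorial sweep over CFD model parameters, keeping the combination whose simulated inlet temperature best matches measurement; the parameter names, toy "simulator", and target value below are all invented.

```python
import itertools

def tune(param_grid, measured, simulate):
    """Full-factorial sweep over tuning parameters, keeping the combination
    whose simulated value is closest to the measurement. A toy stand-in for
    design-of-experiments-based parameter tuning, not the authors' method."""
    best = min(itertools.product(*param_grid.values()),
               key=lambda combo: abs(simulate(dict(zip(param_grid, combo))) - measured))
    return dict(zip(param_grid, best))

# toy "simulator": inlet temperature rises with leakage, falls with mesh level
params = tune(
    param_grid={"leakage": [0.0, 0.1, 0.2], "mesh": [1, 2]},
    measured=24.0,
    simulate=lambda p: 23.0 + 8.0 * p["leakage"] - 0.2 * p["mesh"],
)
```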

