Liquid Cooling of Compute System

2019 ◽  
Vol 141 (1) ◽  
Author(s):  
Jessica Gullbrand ◽  
Mark J. Luckeroth ◽  
Mark E. Sprenger ◽  
Casey Winkel

The continued demand for increased compute performance is driving up the system power and power density of many computers. The higher power requires cooling solutions more efficient than traditional air cooling. Therefore, liquid cooling, which has traditionally been used for large data center deployments, is becoming more mainstream. Liquid cooling can be applied selectively to the high-power components or to the whole compute system. In this paper, a fully liquid-cooled server is used as an example to describe the different ingredients needed, together with their associated design challenges. The liquid cooling ingredients are the cooling distribution unit (CDU), fluid, manifold, quick disconnects (QDs), and cold plates. Intel is driving an initiative to accelerate liquid cooling implementation and deployment by enabling the ingredients above. The functionality of these ingredients is discussed in this paper, with cold plates discussed in detail.

Author(s):  
Devdatta P. Kulkarni ◽  
Priyanka Tunuguntla ◽  
Guixiang Tan ◽  
Casey Carte

Abstract In recent years, the thermal design power (TDP) envelope of computer and server processors has grown rapidly. This is mainly due to increases in processor core count and package thermal resistance, the challenges of multi-chip integration, and the need to maintain generational performance CAGR. At the same time, several other platform-level components such as PCIe cards, graphics cards, SSDs, and high-power DIMMs are being added to the same chassis, which increases server-level power density. To mitigate the cooling challenges of high-TDP processors, two cooling technologies are mainly deployed: liquid cooling and advanced air cooling. Deploying liquid cooling for servers in data centers requires a large initial capital investment. Hence, advanced air-cooling thermal solutions are being sought that can cool higher-TDP processors as well as high-power non-CPU components under the same server-level airflow boundary conditions. Current air-cooling solutions such as heat-pipe and vapor-chamber heat sinks are limited by heat transfer area and heat-carrying capacity, and would need significantly more area to cool higher TDPs than they can handle. Passive two-phase thermosiphon (gravity-dependent) heat sinks may provide an intermediate level of cooling between traditional air-cooled heat-pipe heat sinks and liquid cooling, with higher reliability, lower weight, and lower cost of maintenance. This paper presents experimental results for a 2U thermosiphon heat sink used in an Intel reference 2U, two-node system and compares its thermal performance against traditional heat sink solutions. The objective of this study was to demonstrate CPU cooling capability at least 20% greater than traditional heat sinks while maintaining cooling capability for high-power non-CPU components such as Intel's DIMMs.
This paper also describes a methodology for servicing DIMMs without removing the CPU thermal solution, which is a critical requirement from a data center use perspective.


2016 ◽  
Vol 858 ◽  
pp. 970-973 ◽  
Author(s):  
Vipindas Pala ◽  
Edward van Brunt ◽  
Brett A. Hull ◽  
Scott Allen ◽  
John W. Palmour

Due to their low switching energies, knee-less forward characteristics, and a robust body diode with low reverse recovery, SiC MOSFETs are ideal candidates to replace silicon IGBTs in many high-power medium-voltage topologies. This paper demonstrates how SiC MOSFETs can be effectively combined in series and parallel to maximize system power density and performance.
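As an illustrative sketch only (not taken from the paper), the first-order scaling behind series/parallel device combination can be written out: paralleling n identical MOSFETs divides the effective on-resistance by n, while stacking m in series multiplies both on-resistance and blocking voltage by m. The device values below (25 mΩ, 1.2 kV) are assumed example numbers, and the model assumes ideal static voltage and current sharing.

```python
# Illustrative first-order model of combining identical SiC MOSFETs
# in series and parallel. Assumes ideal static voltage/current sharing;
# real designs must also manage dynamic sharing and gate-drive timing.

def series_parallel_stack(r_ds_on, v_block, n_series, n_parallel):
    """Effective on-resistance (ohms) and blocking voltage (volts)
    of an n_series x n_parallel array of identical devices."""
    r_eff = r_ds_on * n_series / n_parallel   # series adds R, parallel divides it
    v_eff = v_block * n_series                # blocking voltage stacks in series
    return r_eff, v_eff

def conduction_loss(r_eff, i_load):
    """Total conduction loss P = I^2 * R for the whole array (watts)."""
    return i_load ** 2 * r_eff

# Example: 2-series x 4-parallel array of 25 mOhm / 1.2 kV devices at 100 A
r_eff, v_eff = series_parallel_stack(0.025, 1200.0, n_series=2, n_parallel=4)
print(r_eff)                          # 0.0125 ohm effective
print(v_eff)                          # 2400 V blocking
print(conduction_loss(r_eff, 100.0))  # 125 W conduction loss
```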


2009 ◽  
Vol 38 (7) ◽  
pp. 1375-1381 ◽  
Author(s):  
D. T. Crane ◽  
J. W. LAGrandeur ◽  
F. Harris ◽  
L. E. Bell

Author(s):  
Jeffrey D. Rambo ◽  
Yogendra K. Joshi

Data center facilities, which house thousands of servers, storage devices, and other computing hardware arranged in 2-m-high racks, present many thermal challenges. Each rack can dissipate 10–15 kW, and with facilities as large as tens of thousands of square feet, the net power dissipated is typically on the order of several MW. The cost to power these facilities alone can be millions of dollars a year, with the cost of adequate cooling not far behind. Significant savings can be realized by improving the design methodology for these high-power-density data centers. The fundamental need for improved characterization is motivated by the inadequacy of simple energy balances to identify local 'hot spots' and, ultimately, by the need for a reliable modeling framework with which future data centers can be designed. Recent attempts at computational fluid dynamics (CFD) modeling of data centers have been based on a simple rack model, treating the rack either as a uniform heat generator or as a specified temperature rise. This desensitizes the solution to variations in heat load and in the corresponding flow rate needed to cool the servers throughout the rack. Heat generated at the smaller scales (the chip level) produces changes at the larger length scales of the data center, so accurate simulations of these facilities should attempt to resolve the full range of length scales present. In this paper, a multi-scale model is proposed in which each rack is subdivided into a series of sub-models to better mimic the behavior of individual servers inside the data center. A Reynolds-averaged Navier-Stokes CFD model of a 110 m2 (1,200 ft2) representative data center with a raised-floor cooling scheme was constructed around this multi-scale rack model. Each of the 28 racks dissipated 4.23 kW, giving the data center a power density of 1076 W/m2 (100 W/ft2) based on total floor space.
Parametric studies varying the heat loads within a rack and throughout the data center were performed to better characterize the interactions between sub-rack-scale heat generation and the data center. Major results include (1) a nonlinear thermal response in the upper portion of each rack due to recirculation effects, and (2) significant changes in the surrounding racks (up to a 10% increase in maximum temperature) in response to changes in rack flow rate (a 50% decrease).


Author(s):  
Muhamad Faizal Yaakub ◽  
Mohd Amran Mohd Radzi ◽  
Faridah Hanim Mohd Noh ◽  
Maaspaliza Azri

Silicon (Si) based power devices have been employed in most high-power applications for decades. Nowadays, however, most major applications demand higher efficiency and power density, and Si devices have reached their performance limits and can no longer meet all of these requirements. Therefore, Silicon Carbide (SiC), with its unique characteristics, has gained significant attention, particularly in the power electronics field. Compared with Si, SiC offers a remarkable ability to enhance overall system performance, making the transition from Si to SiC crucial. Given its importance, this paper provides an overview of the characteristics, advantages, and outstanding capabilities of SiC devices in various applications. The accompanying system design challenges are discussed at the end of the paper.

