Measured-Temperature Interpolation and Visualization for Data Centers

Author(s):  
Xuanhang (Simon) Zhang ◽  
Christopher M. Healey ◽  
Zachary R. Sheffer ◽  
James W. VanGilder

The growing demand for data center facilities has made intelligent management of data center operations a necessity. For temperature measurement and thermal management, a common practice is to install a limited number of temperature sensors evenly distributed throughout the room. However, data center operators rarely fully equip facilities with temperature sensors due to their cost, complexity, and maintenance requirements, leaving gaps in the data center temperature and cooling picture. The local nature of sensor data can also be misinterpreted and misused. Without novel methods to interpret and visualize temperatures obtained by prediction or measurement, data center operators cannot easily identify urgent local cooling issues or quickly examine temperatures at other locations. This paper presents methods to predict a full three-dimensional temperature field in data centers from a limited number of measurement points. Several different statistical interpolation schemes are discussed. We also validate the interpolated temperature fields against benchmark data from Computational Fluid Dynamics (CFD) and show good agreement.
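
As a rough illustration of how sparse sensor readings can be expanded into a full three-dimensional temperature field, the sketch below applies inverse distance weighting, one simple interpolation scheme of the kind the abstract refers to; it is not the authors' method. The sensor coordinates, readings, and grid dimensions are invented placeholders.

```python
# Minimal inverse-distance-weighting sketch: estimate a 3D temperature field
# on a regular grid from a handful of sensor readings. All values are
# illustrative assumptions, not data from the paper.
import numpy as np

def idw_interpolate(sensor_xyz, sensor_temp, query_xyz, power=2.0, eps=1e-9):
    """Estimate temperatures at query points as distance-weighted averages
    of the sensor readings."""
    # Pairwise distances between query points and sensors: (n_query, n_sensor)
    d = np.linalg.norm(query_xyz[:, None, :] - sensor_xyz[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power            # closer sensors get more weight
    return (w @ sensor_temp) / w.sum(axis=1)

# Hypothetical sensors: (x, y, z) in meters and measured temperature in deg C
sensor_xyz = np.array([[1.0, 1.0, 0.5], [4.0, 1.0, 2.0],
                       [1.0, 5.0, 1.0], [4.0, 5.0, 2.5]])
sensor_temp = np.array([22.0, 27.5, 21.0, 29.0])

# Regular grid spanning a small 5 m x 6 m x 3 m room
xs, ys, zs = np.meshgrid(np.linspace(0, 5, 11),
                         np.linspace(0, 6, 13),
                         np.linspace(0, 3, 7), indexing="ij")
grid = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])

field = idw_interpolate(sensor_xyz, sensor_temp, grid).reshape(xs.shape)
print(field.shape, field.min().round(2), field.max().round(2))
```

Kriging or radial-basis-function interpolation could be swapped in by replacing the weighting step; the grid evaluation stays the same.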

2012 ◽  
Vol 134 (4) ◽  
Author(s):  
Emad Samadiani ◽  
Yogendra Joshi ◽  
Hendrik Hamann ◽  
Madhusudan K. Iyengar ◽  
Steven Kamalsy ◽  
...  

In this paper, an effective and computationally efficient proper orthogonal decomposition (POD) based reduced order modeling approach is presented, which utilizes selected sets of observed thermal sensor data inside the data centers to help predict the data center temperature field as a function of the air flow rates of computer room air conditioning (CRAC) units. The approach is demonstrated through application to an operational data center of 102.2 m2 (1100 square feet) with a hot and cold aisle arrangement of racks cooled by one CRAC unit. While the thermal data throughout the facility can be collected in about 30 min using a 3D temperature mapping tool, the POD method is able to generate the temperature field throughout the data center in less than 2 s on a high-end desktop personal computer (PC). Comparing the obtained POD temperature fields with the experimentally measured data for two different values of CRAC flow rates shows that the method can predict the temperature field with an average error of 0.68 °C or 3.2%. The maximum local error is around 8 °C, but the total number of points where the local error is larger than 1 °C is only ∼6% of the total domain points.
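
The sketch below shows the generic mechanics of observation-driven POD reconstruction: extract modes from a snapshot matrix, fit the mode coefficients to a sparse set of sensor readings, and rebuild the full field. It is a simplified stand-in for the authors' algorithm, and the snapshot data, sensor count, and mode count are synthetic placeholders.

```python
# Generic POD reconstruction sketch (not the authors' exact method).
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: each column is a full temperature field (n_points values)
# observed at one operating condition, e.g. one CRAC flow rate.
n_points, n_snapshots = 2000, 8
snapshots = 24.0 + rng.normal(scale=3.0, size=(n_points, n_snapshots))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)
modes = U[:, :3]                      # keep the 3 most energetic POD modes

# New operating condition: only a handful of sensor readings are available.
sensor_idx = rng.choice(n_points, size=30, replace=False)
sensor_temps = 24.0 + rng.normal(scale=3.0, size=30)

# Fit POD coefficients to the sensors by least squares, then reconstruct
# the full field from the retained modes.
coeffs, *_ = np.linalg.lstsq(modes[sensor_idx],
                             sensor_temps - mean_field[sensor_idx, 0],
                             rcond=None)
reconstructed = mean_field[:, 0] + modes @ coeffs
print(reconstructed.shape)
```

Because the reconstruction is a small least-squares solve over a few mode coefficients, it runs in a fraction of a second, which is the source of the speedup the abstract reports relative to full-field measurement or simulation.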


Author(s):  
Michael M. Toulouse ◽  
David Lettieri ◽  
Van P. Carey ◽  
Cullen E. Bash

This paper summarizes the comparison of predictions by a compact model of air flow and transport in data centers to temperature measurements of an operational data center. The simplified model and code package, referred to as COMPACT (Compact Model of Potential Flow and Convective Transport), is intended as an alternative to time-intensive full CFD thermofluidic models as a first-order design tool, as well as a potential improvement to plant-based controllers. COMPACT is based on potential flow combined with convective energy equations, using sparse matrix solvers to obtain flow and temperature solutions. Full-room solutions can be generated in 15 seconds on a commercially available laptop, and an accompanying graphical user interface has also been developed to allow quick configuration of data center designs and analysis of flow and temperature results. Experiments for validation of the model were conducted at the HP Labs data center in Palo Alto, CA, which has a traditional configuration consisting of inlet floor tiles feeding cold air between two rows of multiple server racks. Air then exits either through ceiling tiles or by direct room return to CRAC units located on the side of the room. Temperatures were recorded at multiple points along entering and exiting flow faces within the room, as well as at various points in the cold and hot aisles, and are presented and compared to model predictions to assess their accuracy. Areas of greater and lesser accuracy are analyzed and presented, along with conclusions about the strengths and weaknesses of the model. For some cases, the average predicted temperature along in-flowing rack faces was within one degree of the average measured temperature. However, the differences in temperature are not evenly distributed. The most pronounced variations between the model and room measurements were located in areas above server racks where recirculation was shown to be most likely to occur. In these areas, the predicted temperature was higher than the experimental values; this can likely be attributed to the absence of buoyancy effects in the simplified potential flow model. Adaptations of the model and its configuration standards for more accurate temperature distributions are proposed, along with investigations into how unaccounted-for heat sources and flow phenomena affect comparisons between measured temperatures and the idealized model output.
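
The sketch below illustrates the kind of computation a potential-flow solver performs: a 2D Laplace equation for the velocity potential, discretized with finite differences and solved with a sparse linear solver. The geometry, boundary values, and grid size are illustrative assumptions; this is not the COMPACT code.

```python
# 2D potential-flow sketch: solve the Laplace equation for a velocity
# potential with a sparse solver, then recover velocities from its gradient.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, ny, h = 40, 30, 0.1                 # grid cells and spacing (m), assumed
N = nx * ny
idx = lambda i, j: i * ny + j           # flatten 2D index

A = sp.lil_matrix((N, N))
b = np.zeros(N)

for i in range(nx):
    for j in range(ny):
        k = idx(i, j)
        if i == 0:                      # inlet: fixed potential (floor tile)
            A[k, k], b[k] = 1.0, 1.0
        elif i == nx - 1:               # outlet: fixed potential (return)
            A[k, k], b[k] = 1.0, 0.0
        elif j in (0, ny - 1):          # walls: zero normal gradient (crude)
            A[k, k], A[k, idx(i, 1 if j == 0 else ny - 2)] = 1.0, -1.0
        else:                           # interior: 5-point Laplacian
            A[k, k] = -4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[k, idx(i + di, j + dj)] = 1.0

phi = spla.spsolve(A.tocsr(), b).reshape(nx, ny)
u = -np.gradient(phi, h, axis=0)        # velocity components from potential
v = -np.gradient(phi, h, axis=1)
print(phi.shape, float(u.mean()))
```

A convective energy equation can then be solved on the resulting velocity field, which is why such solvers are fast but cannot capture buoyancy-driven recirculation, consistent with the discrepancies noted above.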


Author(s):  
Coskun Islam ◽  
Ismail Lazoglu ◽  
Yusuf Altintas

This article presents an enhanced mathematical model for transient thermal analysis in machining processes. The proposed model is able to simulate transient tool, workpiece, and chip temperature fields as a function of time for both interrupted processes with time-varying chip loads, such as milling, and continuous machining processes, such as turning and drilling. A finite difference technique with implicit time discretization is used to solve the partial differential equations governing the temperature fields in the tool, workpiece, and chip. Model validations are performed against experimental temperature measurements available in the literature for the interrupted turning of Ti6Al6V–2Sn, Al2024, and gray cast iron, and for the milling of Ti6Al4V. The simulation results and experimental measurements agree well. With the newly introduced modeling approach, it is demonstrated that the time-dependent dynamic variations of the temperature fields are predicted with at most a 12% difference in the validated cases by the proposed transient thermal model.
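
To make the named technique concrete, here is a minimal 1D backward-Euler finite-difference scheme for transient heat conduction. The material properties, boundary temperatures, and grid are placeholders and far simpler than the coupled tool/workpiece/chip model in the paper.

```python
# 1D transient heat conduction with an implicit (backward-Euler) time step.
import numpy as np

n, L = 51, 0.01                     # grid points, domain length (m), assumed
dx = L / (n - 1)
alpha = 1.0e-5                      # thermal diffusivity (m^2/s), assumed
dt = 0.01                           # time step (s)
r = alpha * dt / dx**2

# Tridiagonal system (I - dt*alpha*D2) T_new = T_old with fixed-temperature ends
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
A[0, 0] = A[-1, -1] = 1.0

T = np.full(n, 20.0)                # initial temperature (deg C)
for step in range(500):
    rhs = T.copy()
    rhs[0], rhs[-1] = 300.0, 20.0   # hot cutting-edge side, cool far side
    T = np.linalg.solve(A, rhs)     # implicit step is unconditionally stable

print(round(T[n // 2], 2))          # mid-point temperature after 5 s
```

The implicit discretization is what allows the paper's model to take time steps sized to the chip-load variation rather than being limited by a stability criterion.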


Author(s):  
Long Phan ◽  
Cheng-Xian Lin ◽  
Mackenson Telusma

Energy consumption and thermal management have become key challenges in the design of large-scale data centers, where perforated tiles are used together with cold and hot aisle configurations to improve thermal management. Although full-field simulations using computational fluid dynamics and heat transfer (CFD/HT) tools can be applied to predict the flow and temperature fields inside data centers, their running time remains the biggest challenge to most modelers. In this paper, response surface methodology based on radial basis functions is used to drastically reduce the running time while preserving the accuracy of the model. The response surface method with data interpolation makes the study of many design parameters of the data center model more feasible and economical in terms of modeling time. Three response surface construction scenarios are investigated, using 5%, 10%, and 20% of the original CFD data points for training. The method shows very good agreement with the simulation results obtained from the CFD/HT model in the case where 20% of the original CFD data points are used for response surface training. Error analysis is carried out to quantify the error associated with each scenario. The 20% case shows superb accuracy compared to the others. With a mean relative error of only 2.12 × 10−4 and R2 = 0.970, this case can capture most aspects of the original CFD model.
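
The sketch below shows the general workflow described: fit a radial-basis-function response surface to a 20% subset of sample points and evaluate it on the rest. The synthetic input/output function stands in for actual CFD/HT results, and all numbers are illustrative assumptions.

```python
# RBF response surface trained on a fraction of "CFD" samples.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Pretend these are CFD samples: design inputs (tile flow fraction, rack power
# in kW) and a resulting temperature at some monitored location (deg C).
X = rng.uniform([0.2, 2.0], [1.0, 10.0], size=(500, 2))
T = 18.0 + 1.5 * X[:, 1] / X[:, 0] + rng.normal(scale=0.1, size=500)

# Train on 20% of the points, as in the best-performing scenario.
n_train = int(0.2 * len(X))
train, test = np.split(rng.permutation(len(X)), [n_train])

surrogate = RBFInterpolator(X[train], T[train], kernel="thin_plate_spline")
pred = surrogate(X[test])

rel_err = np.abs(pred - T[test]) / np.abs(T[test])
print(f"mean relative error: {rel_err.mean():.2e}")
```

Once trained, evaluating the surrogate takes microseconds per design point, which is what makes sweeping many design parameters economical compared with re-running the CFD/HT model.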


Author(s):  
Monem H. Beitelmal ◽  
Zhikui Wang ◽  
Carlos Felix ◽  
Cullen Bash ◽  
Christopher Hoover ◽  
...  

Local airflow distribution in data center environments has historically been accomplished through ventilation tiles distributed over a raised-floor air distribution plenum. The tiles are initially configured upon the commissioning of the facility and, as the IT equipment configuration changes with time, the tiles are adjusted accordingly. However, tile adjustment is a manual process that is error-prone and often non-intuitive. Tile flow rates are a strong function of the underfloor plenum pressure distribution, which is subject to change as tile layouts are reconfigured. Thermal models are often developed to assist with layout changes, but these models can be time-consuming to generate and require skilled users to achieve accurate results. This paper presents an adaptive vent tile (AVT) for use in raised-floor data centers that can adapt to the needs of nearby IT equipment. We present a multi-input-multi-output (MIMO) AVT controller that automatically and dynamically adjusts multiple AVT openings in coordination such that thermal management requirements are met with minimum use of airflow. We describe the development of the dynamic models and the algorithm design of the MIMO controller. The controller was evaluated with a set of AVT units in a production data center environment. Results show that the controller can optimize local airflow distribution, provide fine-grained rack intake temperature control, and respond to disturbances in a manner that is not achievable through a static distribution of tiles.
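
A toy discrete-time integral controller in the spirit of coordinated MIMO vent-tile control is sketched below: several tile openings are adjusted together so that several rack intake temperatures track a setpoint. The static plant gain matrix, ambient temperatures, and gains are invented for illustration; the paper's dynamic models and controller are more sophisticated.

```python
# Toy coordinated (MIMO) integral controller for adaptive vent tiles.
import numpy as np

setpoint = 25.0                               # target intake temperature (C)
openings = np.array([0.5, 0.5, 0.5])          # tile opening fractions (0..1)
G = np.array([[-6.0, -2.0, -0.5],             # assumed C change per unit
              [-2.0, -6.0, -2.0],             #   opening: rack i vs tile j
              [-0.5, -2.0, -6.0]])
ambient = np.array([30.0, 31.0, 29.5])        # intake temps with tiles closed
Ki = 0.05                                     # integral gain

for step in range(200):
    intake = ambient + G @ openings           # simple static plant model
    error = intake - setpoint
    # Coordinated update: distribute the correction across tiles via the
    # plant gain matrix so airflow is not wasted on unaffected racks.
    openings = np.clip(openings + Ki * np.linalg.solve(G, -error), 0.0, 1.0)

print(np.round(ambient + G @ openings, 2), np.round(openings, 2))
```

Coupling the updates through the gain matrix is what distinguishes a MIMO controller from independent per-tile loops, which would fight each other through the shared plenum.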


2020 ◽  
Author(s):  
Adellina Sylvira Azis ◽  
M. Alfarisi Farabbi ◽  
Dian Kristianto Tatarang ◽  
Aziiz Firmansyach

Statistics is a method developed for collecting, analyzing, and compiling sample data in order to obtain reliable results; careful observation is also needed to obtain accurate and concrete data. Various methods can be used to summarize such data, one of which is the measure of central tendency. Measures of central tendency are divided into two groups: those for grouped data and those for ungrouped data. This journal explains in detail the measures of central tendency for ungrouped data, that is, data not arranged in a frequency distribution, so there are no class intervals and no class midpoints. The measures of central tendency for ungrouped data include the arithmetic mean, geometric mean, harmonic mean, weighted mean, median, mode, and fractiles (quartiles, deciles, and percentiles). These measures can be computed using Microsoft Excel and SPSS.
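
For reference, the listed measures can also be computed in a few lines of Python rather than Excel or SPSS; the sample values below are arbitrary.

```python
# Measures of central tendency for a small ungrouped data set.
import statistics
import numpy as np

data = [4, 7, 7, 8, 10, 12, 15, 15, 15, 21]

print("arithmetic mean:", statistics.mean(data))
print("geometric mean: ", statistics.geometric_mean(data))
print("harmonic mean:  ", statistics.harmonic_mean(data))
print("median:         ", statistics.median(data))
print("mode:           ", statistics.mode(data))
print("quartiles:      ", np.percentile(data, [25, 50, 75]))
print("deciles:        ", np.percentile(data, range(10, 100, 10)))
```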


Author(s):  
Huijing Jiang ◽  
Xinwei Deng ◽  
Vanessa Lopez ◽  
Hendrik Hamann

Energy consumption of data centers has increased dramatically due to the massive computing demands driven by every sector of the economy. Hence, data center energy management has become very important for operating data centers within environmental standards while achieving low energy cost. In order to advance the understanding of thermal management in data centers, relevant environmental information such as temperature, humidity, and air quality is gathered through a network of real-time sensors or simulated via sophisticated physical models (e.g., computational fluid dynamics models). However, sensor readings of environmental parameters are collected only at sparse locations and thus cannot provide a detailed map of the temperature distribution for the entire data center. While the physical models yield high-resolution temperature maps, it is often not feasible, due to the computational complexity of these models, to run them in real time, which is ideally required for optimum data center operation and management. In this work, we propose a novel statistical modeling approach to updating physical model outputs in real time and automatically scheduling when the physical model outputs should be recomputed. The proposed method dynamically corrects the discrepancy between a steady-state output of the physical model and real-time thermal sensor data. We show that the proposed method can provide valuable information for data center energy management, such as real-time high-resolution thermal maps. Moreover, it can efficiently detect systematic changes in a data center's thermal environment and automatically schedule the physical models to be re-executed whenever significant changes are detected.
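
The simplified sketch below captures the idea in the abstract without the authors' statistical machinery: hold a steady-state model output, correct it toward live sensor readings, and flag when the accumulated discrepancy is large enough that the physical model should be re-run. The data, smoothing constant, and re-run threshold are illustrative assumptions.

```python
# Real-time correction of a steady-state model output plus change detection.
import numpy as np

rng = np.random.default_rng(2)

model_at_sensors = np.array([24.0, 26.5, 23.0, 28.0])  # steady-state CFD output (C)
bias = np.zeros(4)                                      # learned correction per sensor
alpha, rerun_threshold = 0.2, 2.0                       # smoothing, threshold in deg C

for t in range(150):
    drift = 0.1 * max(0, t - 60)                        # a systematic change starts at t=60
    readings = model_at_sensors + drift + rng.normal(scale=0.3, size=4)

    residual = readings - (model_at_sensors + bias)
    bias += alpha * residual                            # real-time correction
    corrected = model_at_sensors + bias                 # updated thermal-map values

    if np.abs(bias).max() > rerun_threshold:            # systematic change detected
        print(f"t={t}: schedule physical model re-run "
              f"(max bias {np.abs(bias).max():.2f} C)")
        break
```

In the paper the correction is spatial (a full thermal map, not just sensor locations) and the change detection is statistically principled, but the correct-then-reschedule loop is the same.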


Author(s):  
Hongfei Li ◽  
Hendrik F. Hamann

Although in most buildings the spatial allocation of cooling resources can be managed using multiple air handling units and an air ducting system, it can be challenging for an operator to leverage this capability, partially because of the complex interdependencies between the different control options. This is particularly important for data centers, where cooling is a major cost and the sufficient allocation of cooling resources has to ensure the reliable operation of mission-critical information processing equipment. It has been shown that thermal zones can provide valuable decision support for optimizing cooling. Such thermal zones are generally defined as the region of influence of a particular cooling unit or cooling "source" (such as an air conditioning unit (ACU)). In this paper we show results from a statistical approach in which we leverage real-time sensor data to obtain thermal zones in real time. Specifically, we model the correlations between temperatures observed from sensors located at the discharge of an ACU and the other sensors located in the room. Outputs from the statistical solution can be used to optimize the placement of equipment in a data center, investigate failure scenarios, and verify that a proper cooling solution has been achieved.
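
A bare-bones version of the correlation idea is sketched below: assign each room sensor to the ACU whose discharge temperature its readings track most closely. The time series are synthetic stand-ins for real sensor data, and the paper's statistical model is richer than a plain correlation.

```python
# Thermal-zone assignment by correlating room sensors with ACU discharge sensors.
import numpy as np

rng = np.random.default_rng(3)
n_time = 500

# Discharge temperatures of two ACUs (columns) over time.
acu = 15.0 + np.column_stack([np.sin(np.linspace(0, 20, n_time)),
                              np.cos(np.linspace(0, 20, n_time))])
acu += rng.normal(scale=0.1, size=acu.shape)

# Room sensors: each is influenced mostly by one ACU plus local noise.
mix = np.array([[0.9, 0.1], [0.7, 0.3], [0.2, 0.8], [0.1, 0.9]])
room = 10.0 + acu @ mix.T + rng.normal(scale=0.3, size=(n_time, 4))

# Correlate every room sensor with every ACU discharge sensor and pick the
# strongest correlation to define each sensor's thermal zone.
corr = np.corrcoef(np.hstack([acu, room]).T)[:2, 2:]    # shape (n_acu, n_room)
zones = corr.argmax(axis=0)
print("room sensor -> ACU zone:", zones)                 # expected [0 0 1 1]
```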


2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Ethan Cruz ◽  
Yogendra Joshi

Both localized power densities and overall power consumption within the data center continue to rise, following the same upward trend as the information technology (IT) equipment housed within the data center. Air cooling of this increasing power has proved a significant challenge at both the IT equipment and data center level. In order to combat this challenge, computational fluid dynamics and heat transfer (CFD/HT) models have been employed as the dominant technique for the design and optimization of both new and existing data centers. This study is a continuation of earlier comparisons of CFD/HT models to experimentally measured temperature and flow fields in a small data center test cell. It compares an inviscid model, a laminar flow model, and three turbulence models against six sets of experimentally collected data. The six sets of data are from two different IT equipment rack power dissipations using three different layouts of perforated tiles. Insight into the locations of the deviations between the different CFD/HT models and the experimental data is discussed, along with the computational effort involved in running the models. A new grid analysis was performed on the different CFD/HT models in order to minimize computational effort. The inviscid model was able to run with a smaller grid size than the viscous models, and even for the same grid size it was found to run 30% faster than the fastest viscous model. Due to both the reduced grid size and the lower computational effort (owing to the simpler equation set), the inviscid model ran over thirty times faster than the next fastest model. That the inviscid model ran the fastest is not surprising; what was not expected is that the inviscid model was also found to have the smallest deviations from the experimental data for all six of the cases. This is most likely due to the arrangement of the data center test cell, with relatively few high-velocity air jets and a large open space around the IT equipment. More tightly packed data centers with higher air velocities and turbulent mixing conditions will certainly produce different results than those found in this study.

