Real-Time Data Center Transient Analysis

Author(s):  
Xuanhang Simon Zhang ◽  
James W. VanGilder

A software tool was developed to predict the transient cooling performance of data centers and to explore design and management alternatives in real time. Cooling performance can be affected by factors such as room architecture, rack population and layout, the connection of cooler fans and chilled-water pumps to UPSs, and the size of the chilled-water storage tank. The available transient cooling runtime is dictated mainly by the system's stored cooling capacity and the total load in the data center. This paper discusses the transient response of data centers to different design and failure scenarios and details a comprehensive, efficient approach for simulating this performance.
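The abstract's core relation, runtime governed by stored cooling capacity versus total load, can be sketched as a lumped estimate. This is an illustrative assumption, not the paper's simulation method, and all numbers below are invented:

```python
# Hedged sketch: estimate the available transient cooling runtime from the
# stored cooling capacity of a chilled-water tank, as the abstract describes.
# All values are illustrative assumptions, not data from the paper.

def runtime_seconds(tank_volume_m3, delta_t_k, total_load_kw,
                    rho=1000.0, cp=4.186):
    """Lumped estimate: runtime = stored cooling energy / total heat load.

    Stored energy (kJ) = rho * V * cp * usable delta-T of the chilled-water
    tank; the load is in kW, so the ratio comes out in seconds.
    """
    stored_kj = rho * tank_volume_m3 * cp * delta_t_k
    return stored_kj / total_load_kw

# e.g. a 40 m^3 tank with 6 K usable delta-T against a 1 MW total load
print(round(runtime_seconds(40.0, 6.0, 1000.0)))  # ~1005 s
```

A real transient solver would also track room air and equipment thermal mass, but the dominant scaling (capacity over load) is the one shown here.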

2013 ◽  
Vol 135 (3) ◽  
Author(s):  
Dustin W. Demetriou ◽  
H. Ezzat Khalifa

This paper expands on the work of Demetriou and Khalifa (2013, “Thermally Aware, Energy-Based Load Placement in Open-Aisle, Air-Cooled Data Centers,” ASME J. Electron. Packag., 135(3), p. 030906), which investigated practical IT load placement options in open-aisle, air-cooled data centers. That study found a robust approach to be the use of real-time temperature measurements at the rack inlets to remove IT load from the servers with the warmest inlet temperatures. Considering the holistic optimization of the load placement strategy together with the cooling infrastructure, over a range of data center IT utilization levels, the present study investigated: the effect of ambient temperature on data center operation; the consolidation of servers by shutting them off completely; a strategy complementary to those of Demetriou and Khalifa, in which IT load is added beginning with the servers that have the coldest inlet temperatures; and the development of load placement rules via either static allocation (i.e., during data center benchmarking) or dynamic allocation (using real-time data from the current thermal environment). Across these case studies, a key finding has been that holistic optimization of the data center and its cooling infrastructure yields significant savings in cooling power consumption by reducing the CRAH airflow rate. In many cases, these savings exceed those from supplying higher-temperature chilled water from the refrigeration units.
Therefore, the path to realizing the industry's goal of higher IT equipment inlet temperatures for better energy efficiency should combine reduced airflow rates with increased supply air temperatures, rather than relying on higher CRAH supply air temperatures alone.
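The load-placement rule the study found robust, shedding IT load starting from the servers with the warmest measured inlet temperatures, reduces to a sort-and-shed loop. The data structure and values below are illustrative assumptions:

```python
# Hedged sketch of the robust load-placement rule described above: remove IT
# load from the servers with the warmest inlet temperatures first.
# Server names, temperatures, and loads are invented for illustration.

def shed_load(servers, load_to_remove_kw):
    """servers: list of (name, inlet_temp_C, load_kW) tuples.

    Returns the servers chosen for shedding, warmest inlets first, until at
    least load_to_remove_kw has been removed.
    """
    chosen, removed = [], 0.0
    for name, temp, load in sorted(servers, key=lambda s: -s[1]):
        if removed >= load_to_remove_kw:
            break
        chosen.append(name)
        removed += load
    return chosen

racks = [("r1", 24.5, 5.0), ("r2", 27.1, 5.0), ("r3", 25.8, 5.0)]
print(shed_load(racks, 8.0))  # ['r2', 'r3'] -- the two warmest inlets
```

The complementary strategy for adding load would simply reverse the sort, placing new load on the coldest-inlet servers first.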


Author(s):  
James W. VanGilder ◽  
Xuanhang Zhang ◽  
Saurabh K. Shrivastava

The Partially Decoupled Aisle (PDA) method facilitates a near-real-time cooling-performance analysis of a single cluster of racks and, potentially, coolers, bounding a common hot or cold aisle in a data center. With the PDA method, the airflow patterns and related variables need be computed only within an isolated cold or hot aisle “on the fly” through CFD analysis or other means. The analysis is fast because the much larger surrounding room environment is not directly modeled; its effect enters the model through the boundary conditions applied to the top and ends of the isolated aisle. The proper boundary conditions in turn may be estimated from an empirical model determined in advance (“offline”) from the study of a large number of CFD simulations of varying equipment layouts and room environments. A software tool based on the PDA method, which uses a full CFD engine to solve the aisle airflow within the isolated aisle, can analyze a typical cluster of racks and coolers in 10–30 seconds and requires no special user skills. This paper formally introduces the general PDA method and shows several examples of its application with comparisons to corresponding whole-room CFD analyses.
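The two-stage structure of the PDA method, an offline empirical model supplying boundary conditions to a fast isolated-aisle solve, can be outlined in code. Both functions below are toy stand-ins (assumptions), not the authors' empirical model or CFD engine:

```python
# Hedged structural sketch of the PDA workflow described in the abstract: an
# offline empirical surrogate supplies boundary conditions at the top and
# ends of the isolated aisle, and only the aisle itself is solved online.
# The rules and numbers here are invented placeholders.

def boundary_conditions(layout):
    """Stand-in for the offline empirical model: map an equipment layout to
    boundary airflow at the aisle top and ends (illustrative rule only)."""
    imbalance = (sum(r["airflow"] for r in layout["racks"])
                 - sum(c["airflow"] for c in layout["coolers"]))
    return {"top_inflow": max(imbalance, 0.0),
            "end_inflow": max(-imbalance, 0.0)}

def solve_isolated_aisle(layout):
    """Stand-in for the fast online solve; a real tool would run CFD on the
    aisle only, using the boundary conditions from the offline model."""
    bc = boundary_conditions(layout)
    rack_flow = sum(r["airflow"] for r in layout["racks"])
    supplied = sum(c["airflow"] for c in layout["coolers"])
    return {"bc": bc, "supply_fraction": min(supplied / rack_flow, 1.0)}

cluster = {"racks": [{"airflow": 2.0}, {"airflow": 2.0}],
           "coolers": [{"airflow": 3.0}]}
print(solve_isolated_aisle(cluster))
```

The speed claim in the abstract (10–30 s per cluster) comes from exactly this decoupling: the expensive whole-room physics is folded into the precomputed boundary model.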


Energies ◽  
2019 ◽  
Vol 12 (15) ◽  
pp. 2996 ◽  
Author(s):  
Jinkyun Cho ◽  
Beungyong Park ◽  
Yongdae Jeong

If a data center experiences a system outage or fault conditions, it becomes difficult to provide a stable and continuous information technology (IT) service. Therefore, it is critical to design and implement a backup system so that stability can be maintained even in emergency (unforeseen) situations. In this study, an actual 20 MW data center project was analyzed to evaluate the thermal performance of an IT server room during a cooling system outage under six fault conditions. In addition, a method of organizing and systematically managing operational stability and energy efficiency verification was identified for data center construction in accordance with the commissioning process. Up to a chilled water supply temperature of 17 °C and a computer room air handling unit supply air temperature of 24 °C, the temperature of the air flowing into the IT server room fell within the allowable range specified by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers standard (18–27 °C), and allowable operation was possible for approximately 320 s after a cooling system outage. Starting at a chilled water supply temperature of 18 °C and a supply air temperature of 25 °C, a rapid temperature increase occurred, which is a serious cause of IT equipment failure. Because of the use of cold aisle containment and designs with relatively high chilled water and supply air temperatures, a rapid temperature increase inside an IT server room is highly likely during a cooling system outage; thus, the backup system must be activated within 300 s. It is essential to understand the operational characteristics of data centers and to design optimal cooling systems to ensure the reliability of high-density data centers. In particular, it is necessary to consider these physical results and to perform an integrated review of the time required for emergency cooling equipment to operate as well as the backup system availability time.
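The relationship the abstract reports, higher supply temperatures leaving less headroom before the ASHRAE allowable limit is reached, can be illustrated with a lumped-capacitance estimate. The thermal-mass figure below is an invented assumption, not the paper's commissioning data:

```python
# Hedged sketch: a lumped-capacitance estimate of how long room air stays
# inside the ASHRAE allowable range (18-27 C) after a cooling outage.
# The effective thermal mass is an illustrative assumption, not a value
# measured in the paper.

def allowable_time_s(t_supply_c, it_load_kw, thermal_mass_kj_per_k,
                     t_limit_c=27.0):
    """With cooling off, air temperature rises at roughly load / thermal
    mass (K per second); time to the limit is headroom / rise rate."""
    rise_rate = it_load_kw / thermal_mass_kj_per_k
    return (t_limit_c - t_supply_c) / rise_rate

# e.g. 24 C supply air, 20 MW IT load, assumed 2e6 kJ/K effective mass
print(round(allowable_time_s(24.0, 20000.0, 2.0e6)))  # 300 s
```

Raising the supply temperature to 25 °C in this toy model cuts the headroom from 3 K to 2 K, which is the qualitative effect behind the paper's 300 s backup-activation requirement.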


2014 ◽  
Vol 137 (1) ◽  
Author(s):  
M. Alkhair ◽  
M. Y. Sulaiman ◽  
K. Sopian ◽  
C. H. Lim ◽  
E. Salleh ◽  
...  

This study models the performance of a one refrigeration ton (RT) solar-assisted adsorption air-conditioning system using activated carbon fiber/ethanol as the adsorbent/adsorbate pair. The effects of the hot water, cooling water, and chilled water inlet temperatures, as well as the hot water and chilled water flow rates, were taken into consideration in the optimization of the system and in the design of the condenser, evaporator, and hot water storage tank. The study includes analysis of the weather data and its effect on both the adsorption system and the cooling load, followed by estimation of the cooling capacity and coefficient of performance (COP) of the adsorption system as a function of the input parameters. The results of the model will be compared with experimental data in a subsequent step.
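The two quantities the abstract estimates, cooling capacity and COP, have simple defining relations that can be sketched directly. The interface and numbers below are illustrative assumptions, not the authors' model:

```python
# Hedged sketch: the COP of a heat-driven adsorption chiller is the useful
# cooling delivered divided by the driving heat input, and the chilled-water
# side capacity follows from flow rate and temperature drop. All values are
# invented for illustration.

def adsorption_cop(q_chilled_kw, q_hot_kw):
    """COP = useful cooling delivered / heat supplied by the hot water."""
    return q_chilled_kw / q_hot_kw

def cooling_capacity_kw(m_dot_kg_s, t_in_c, t_out_c, cp=4.186):
    """Chilled-water side capacity: m_dot * cp * (T_in - T_out)."""
    return m_dot_kg_s * cp * (t_in_c - t_out_c)

q_evap = cooling_capacity_kw(0.2, 14.0, 9.0)  # ~4.19 kW, about 1.2 RT
print(round(q_evap, 2), round(adsorption_cop(q_evap, 8.0), 2))
```

Heat-driven adsorption chillers typically reach COPs well below vapor-compression systems, which is why the paper optimizes the inlet temperatures and flow rates around these relations.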


Energies ◽  
2020 ◽  
Vol 13 (12) ◽  
pp. 3164
Author(s):  
Rasool Bukhsh ◽  
Muhammad Umar Javed ◽  
Aisha Fatima ◽  
Nadeem Javaid ◽  
Muhammad Shafiq ◽  
...  

The computing devices in cloud and fog data centers remain in a continuous running cycle to provide services. The long execution of a large number of computing devices consumes a significant amount of power, which is released as an equivalent amount of heat into the environment, and device performance is compromised in a heated environment. High-powered cooling systems are therefore installed to cool the data centers, so data centers demand high electricity for both computing devices and cooling systems. Moreover, in the Smart Grid (SG), managing energy consumption to reduce the electricity cost for consumers and to minimize reliance on the fossil-fuel-based power supply (utility) is an interesting domain for researchers, and SG applications are time-sensitive. In this paper, a fog-based model is proposed for a community to ensure real-time energy management service provision. Three scenarios are implemented to analyze cost-efficient energy management for power users. In the first scenario, the community's and the fog's power demand is met from the utility. In the second scenario, the community's Renewable Energy Resources (RES)-based Microgrid (MG) is integrated with the utility to meet the demand. In the third scenario, the demand is met by integrating the fog's MG, the community's MG, and the utility. In each scenario, the energy demand of the fog is evaluated with the proposed mechanism: the energy required to run the computing devices against the number of requests, and the power required to cool the devices down, are calculated to find the energy demand of the fog's data center. The simulated case studies show that the energy cost of meeting the demand of the community and the fog's data center in the third scenario is 15.09% and 1.2% lower than in the first and second scenarios, respectively. This paper also proposes an energy contract that ensures the participation of all power-generating stakeholders; the results show this contract is more cost-efficient than the third scenario.
The integration of RES reduces both the energy cost and CO2 emissions. The simulations for energy management and the plots of results were performed in MATLAB; the simulations for the fog's resource management and the measurement of processing and response times were performed in CloudAnalyst.
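The scenario comparison in the abstract is a percentage-savings calculation over total energy costs. The cost figures below are placeholders chosen so the arithmetic reproduces the quoted percentages; they are not the paper's data:

```python
# Hedged sketch: comparing scenario energy costs as percentage savings, the
# way the abstract reports the third scenario against the first two.
# The cost figures are invented placeholders, not results from the paper.

def pct_saving(baseline_cost, new_cost):
    """Percentage reduction of new_cost relative to baseline_cost."""
    return 100.0 * (baseline_cost - new_cost) / baseline_cost

costs = {"scenario1": 1000.0, "scenario2": 859.4, "scenario3": 849.1}
print(round(pct_saving(costs["scenario1"], costs["scenario3"]), 2))  # 15.09
print(round(pct_saving(costs["scenario2"], costs["scenario3"]), 2))  # 1.2
```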


2018 ◽  
Vol 7 (2.7) ◽  
pp. 1
Author(s):  
Gatla Vinay ◽  
T Pavan Kumar

Penetration testing is a specialized security auditing methodology in which a tester simulates an attack on a secured system. The main theme of this paper is how to collect, in real time, the massive volume of log files generated across virtual data centers; these logs carry otherwise-invisible information of considerable organizational value. Such testing spans all aspects of log management across the many servers of a virtual data center. Virtualization limits costs by reducing the need for physical hardware systems, though it requires high-end hardware for processing. In real-world scenarios, logs accumulate across vCenter, ESXi hosts, and individual VMs, making manual analysis tedious and time-consuming; configuring a centralized log management server instead yields powerful insight. Accurate search capabilities, including field searching (title, author, and content), field sorting, and multiple-index search with merged results, together with simultaneous file updates, automatic grouping of results, and plug-ins for common search-engine file formats, are effective measures in an investigation. Finally, flexible network security monitoring, traffic investigation, offense detection, log recording, and distributed inquiry can export data to a variety of visualization dashboards, which is exactly what is needed for log investigations across virtual data centers in real time.


Author(s):  
Saurabh K. Shrivastava ◽  
James W. VanGilder ◽  
Bahgat G. Sammakia

An analytical approach using artificial intelligence has been developed for assessing the cooling performance of data centers. This paper discusses the use of a Neural Network (NN) model in the real-time prediction of the cooling performance of a cluster of equipment in a data center environment. The NN model is used to predict the Capture Index (CI) [1] as a function of rack power, cooler airflow and physical/geometric arrangement for a cluster located in a simple room environment. The Neural Network is “trained” on thousands of hypothetical but realistic cluster variations for which CI values have been computed using either PDA [2] or full Computational Fluid Dynamics (CFD). The great value of the NN approach lies in its ability to capture the non-linear relationships between input parameters and corresponding capture indices. The accuracy of the NN approach is 3.8% (Root Mean Square Error) for a set of example scenarios discussed here. Because of the real-time nature of the calculations, the NN approach readily facilitates optimization studies. Example cases are discussed which show the integration of the NN approach and a genetic algorithm used for optimization.
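The capture index the NN predicts is, for a cold-aisle cluster, the fraction of a rack's inlet airflow that originates from the cooler supplies, and the reported 3.8% accuracy is a root-mean-square error over test cases. The sketch below shows both definitions; the CI values are invented, not the paper's results:

```python
import math

# Hedged sketch: the cold-aisle capture index (CI) is the fraction of a
# rack's inlet airflow captured from cooler supplies, and the NN accuracy
# quoted in the abstract is an RMSE over test cases. The CI values below
# are invented for illustration.

def capture_index(inlet_fractions):
    """inlet_fractions: per-rack fraction of inlet air drawn from cooler
    supply. CI is usually reported per rack; here we average the cluster."""
    return sum(inlet_fractions) / len(inlet_fractions)

def rmse(predicted, actual):
    """Root-mean-square error between predicted and reference CI values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(predicted))

ci_cfd = [0.95, 0.88, 0.91]  # reference values from full CFD (invented)
ci_nn = [0.92, 0.90, 0.95]   # NN predictions (invented)
print(round(100 * rmse(ci_nn, ci_cfd), 1))  # RMSE in percent
```

Because each NN evaluation is effectively instantaneous compared with a CFD solve, wrapping it in a genetic algorithm, as the paper does, makes layout optimization over thousands of candidate clusters practical.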

