Energy Efficiency and Air Quality Considerations in Airside Economized Data Centers

Author(s):  
Levente J. Klein
Sergio A. Bermudez
Fernando J. Marianno
Hendrik F. Hamann
Prabjit Singh

Many data center operators are considering converting from mechanical cooling to free air cooling to improve energy efficiency. The main advantage of free air cooling is the elimination of chiller and air conditioning unit operation whenever the outdoor temperature falls below the data center temperature setpoint. The main concerns with adopting outside air cooling are the accidental introduction of gaseous pollutants into the data center along with the fresh air, and potential latency in the control infrastructure's response to extreme events. Recent developments in ultra-high-sensitivity corrosion sensors enable real-time monitoring of air quality and thus allow a better understanding of how airflow, relative humidity, and temperature fluctuations affect corrosion rates. Both the sensitivity of the sensors and the ability of wireless networks to detect and react rapidly to any contamination event make them reliable tools for preventing corrosion-related failures. A feasibility study is presented for eight legacy data centers being evaluated for free air cooling implementation.
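The economizer's core decision described above can be sketched in a few lines: count the hours in a weather trace during which outdoor air is cool enough to replace mechanical cooling. This is a minimal illustration with invented names and an assumed setpoint, not the paper's method.

```python
# Hypothetical sketch: estimate free-cooling hours from hourly outdoor
# dry-bulb temperatures. The setpoint and data are illustrative.

def free_cooling_hours(hourly_temps_c, setpoint_c=24.0):
    """Count hours when outdoor air is below the data center
    temperature setpoint, i.e. free air cooling suffices."""
    return sum(1 for t in hourly_temps_c if t < setpoint_c)

# Toy example: one day that is cool at night and warm in the afternoon.
day = [12, 11, 10, 10, 11, 13, 16, 19, 22, 25, 27, 28,
       29, 29, 28, 27, 25, 23, 21, 19, 17, 15, 14, 13]
print(free_cooling_hours(day))  # hours below the 24 °C setpoint -> 16
```

Run against a full year of hourly weather data, the same count gives the annual free-cooling fraction for a candidate site.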

2020
Vol 142 (2)
Author(s):
Oluwaseun Awe
Jimil M. Shah
Dereje Agonafer
Prabjit Singh
Naveen Kannan
...

Abstract Airside economizers lower the operating cost of data centers by reducing or eliminating mechanical cooling. They do, however, increase the risk of reliability degradation of information technology (IT) equipment due to contaminants. IT equipment manufacturers have tested equipment performance and guarantee the reliability of their equipment in environments within ISA 71.04-2013 severity level G1 and the ASHRAE recommended temperature-relative humidity (RH) envelope, and they require data center operators to meet all the specified conditions consistently before honoring warranty claims on failed equipment. Determining the reliability of electronic hardware under higher severity conditions requires field data obtained from real data centers. In this study, a corrosion classification coupon experiment per ISA 71.04-2013 was performed to determine the severity level of a research data center (RDC) located in an industrial area of hot and humid Dallas. The temperature-RH excursions were analyzed based on time series and weather data bin analysis using trend data for the duration of operation. After some period of operation, a failure was recorded on two power distribution units (PDUs) located in the hot aisle. The damaged hardware and other hardware were evaluated, and a cumulative corrosion damage study was carried out. A hypothetical estimation of the end of life of components is provided to determine free air-cooling hours for the site. The fact that not a single server operated with fresh air cooling failed shows that evaporative/free air cooling is not detrimental to IT equipment reliability. This study, however, must be repeated in other geographical locations to determine whether the contamination effect is location dependent.
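The coupon experiment referenced above maps a measured copper corrosion film thickness to an ISA-71.04 severity level. The threshold values below come from the standard's copper-reactivity classification; the function itself is only an illustrative sketch.

```python
# ISA-71.04 copper-reactivity classification: map the corrosion film
# grown on a copper coupon over 30 days (in angstroms) to a severity
# level. Thresholds are from the standard; the code is illustrative.

def isa_severity(copper_angstroms_per_30_days):
    """Return the ISA-71.04 severity level for a 30-day copper
    coupon corrosion measurement."""
    r = copper_angstroms_per_30_days
    if r < 300:
        return "G1 (mild)"
    elif r < 1000:
        return "G2 (moderate)"
    elif r < 2000:
        return "G3 (harsh)"
    else:
        return "GX (severe)"

print(isa_severity(250))   # G1 (mild) - within warranty conditions
print(isa_severity(1500))  # G3 (harsh)
```

Level G1 is the environment in which manufacturers guarantee reliability; anything above it is the "higher severity" regime the study investigates.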


Author(s):  
Magnus K. Herrlin ◽  
Michael K. Patterson

Increased Information and Communications Technology (ICT) capability and improved energy efficiency of today’s server platforms have created opportunities for the data center operator. However, these platforms also strain the capabilities of many data center cooling systems. New design considerations are necessary to effectively cool high-density data centers. Challenges exist in both capital costs and operational costs in the thermal management of ICT equipment. This paper details how air cooling can be used to address both challenges to provide a low Total Cost of Ownership (TCO) and a highly energy-efficient design at high heat densities. We consider trends in heat generation from servers and how the resulting densities can be effectively cooled. A number of key factors are reviewed and appropriate design considerations developed to air cool 2000 W/ft2 (21,500 W/m2). Although greater engineering effort is required, such data centers can be built with current technology, hardware, and best practices. The density limitations are shown to arise primarily from an airflow management and cooling system controls perspective. Computational Fluid Dynamics (CFD) modeling is discussed as a key part of the analysis allowing high-density designs to be successfully implemented. Well-engineered airflow management systems and control systems designed to minimize airflow by preventing mixing of cold and hot airflows allow high heat densities. Energy efficiency is gained by treating the whole equipment room as part of the airflow management strategy, making use of the extended environmental ranges now recommended and implementing air-side economizers.
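A quick back-of-envelope check of what 21,500 W/m2 demands of an air cooling system follows from the sensible-heat balance Q = P / (rho * cp * dT). This sketch is not from the paper; the air properties and temperature rise are standard assumptions.

```python
# Back-of-envelope: volumetric airflow needed to remove a heat load at
# a given air-side temperature rise, Q = P / (rho * cp * dT).
# Air properties are sea-level approximations (assumed, not from paper).

RHO_AIR = 1.2    # kg/m^3
CP_AIR = 1005.0  # J/(kg*K)

def airflow_m3_per_s(heat_load_w, delta_t_k):
    """Volumetric airflow required to absorb heat_load_w with a
    delta_t_k temperature rise across the IT equipment."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

# 21,500 W over 1 m^2 of floor area with an assumed 15 K air-side rise:
q = airflow_m3_per_s(21_500, 15.0)
print(f"{q:.2f} m^3/s per m^2 of floor")  # roughly 1.19 m^3/s
```

Moving over a cubic meter of air per second per square meter of floor is why airflow management, rather than raw cooling capacity, tends to set the density limit.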


2013
Vol 284-287
pp. 3597-3603
Author(s):
Cheng Jen Tang
Miau Ru Dai

Demand response (DR) is an important ingredient of, and regarded as the killer application for, the emerging smart grid. The continuously growing energy consumption of data centers makes them promising candidates with significant potential for DR. Participating in DR programs gives data centers an additional financial resource beyond service income. On the other hand, some government organizations also offer considerable incentives to promote energy-saving actions at facilities holding certain certifications. The Leadership in Energy and Environmental Design (LEED) rating system developed by the U.S. Green Building Council (USGBC) is one of the most popular certification systems. LEED uses Power Usage Effectiveness (PUE) as one of the metrics for quantifying how energy efficient a data center is; the goal of PUE is to drive improvement in a data center's energy efficiency. DR programs, by contrast, require participants to temporarily reduce their power demand on certain occasions with little concern for energy efficiency. To enjoy incentives from LEED certification, data center administrators need to know whether participation in DR hampers the established PUE of their facilities. This paper examines power consumption models from prior studies and identifies the constraints introduced by PUE for data centers participating in DR programs. The examination reveals that the ratios of static power consumption to the dynamic power demand range of different types of data center equipment do affect PUE during demand reduction efforts. With this finding, facility managers have a clear picture of what to expect from DR participation and what to adjust in their data center equipment.
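The static-versus-dynamic tension described above is easy to see numerically: if every component has a fixed power floor plus a load-proportional part, cutting IT load during a DR event shrinks the denominator of PUE faster than the numerator. The model below is a hypothetical illustration, not the paper's model; all parameter values are invented.

```python
# Hypothetical static + dynamic power model showing why demand
# reduction raises PUE. All component values are illustrative.

def pue(it_static, it_dynamic, infra_static, infra_dynamic, load):
    """PUE = total facility power / IT power, with each component
    modeled as static + dynamic * load, for load in [0, 1]."""
    p_it = it_static + it_dynamic * load
    p_infra = infra_static + infra_dynamic * load
    return (p_it + p_infra) / p_it

normal = pue(100, 200, 80, 70, load=1.0)  # full load
dr = pue(100, 200, 80, 70, load=0.5)      # during demand reduction
print(round(normal, 3), round(dr, 3))     # 1.5 vs 1.575
```

The higher the static share of infrastructure power, the larger the PUE penalty for the same demand reduction, which is exactly the ratio the paper identifies as the governing constraint.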


Author(s):
Gautham Thirunavakkarasu
Satyam Saini
Jimil Shah
Dereje Agonafer

The percentage of data center energy used for cooling equipment has been on the rise. This has created a necessity for exploring new and more efficient methods, such as airside economization, to contain this energy demand, from both an engineering and a business point of view. Air cooling, especially free air cooling, has always been the first choice for IT companies to cool their equipment, but it has downsides as well. Per the ASHRAE standard (2009b), the air entering the data center should be continuously filtered with MERV 11 or, preferably, MERV 13 filters, and the air inside the data center should be clean per ISO Class 8. The objective of this study is to design a model data center and simulate the flow path with the help of the 6SigmaRoom analysis software. A high-density data center was modelled for both hot aisle and cold aisle containment configurations. The particles considered for modelling were spherical, with diameters of 0.05, 0.1, and 1 micron. The physical properties of the submicron particles were assumed to be the same as those of air; for the heavier 1 micron particles, the properties of a dense carbon particle were chosen to simulate particulate contamination. The computer room air conditioning (CRAC) unit is modelled as the source of the particulate contaminants, representing contaminants entering along with free air through an airside economizer. The data obtained from this analysis can help predict which type of particle will be deposited at which location, based on the distance from the source and the weight of the particles. This can further help in reinforcing the regions with a potential to fail under particulate contamination.
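The size-dependent deposition behavior the study models can be illustrated with the Stokes terminal settling velocity, v = rho_p * d^2 * g / (18 * mu). This sketch is not the study's CFD model; it ignores the Cunningham slip correction that matters for the submicron sizes, and the particle density is an assumption.

```python
# Illustrative Stokes-law settling velocity for the particle sizes in
# the study. Not the study's CFD model; slip correction is neglected.

G = 9.81          # m/s^2
MU_AIR = 1.81e-5  # Pa*s, dynamic viscosity of air near 20 C

def settling_velocity(diameter_m, particle_density_kg_m3):
    """Stokes terminal velocity of a small sphere in still air:
    v = rho_p * d^2 * g / (18 * mu)."""
    return particle_density_kg_m3 * diameter_m**2 * G / (18 * MU_AIR)

# 1 micron particle with an assumed dense-carbon-like density:
v1 = settling_velocity(1e-6, 2000.0)
v01 = settling_velocity(0.1e-6, 2000.0)
print(f"{v1:.2e} m/s vs {v01:.2e} m/s")
```

The hundredfold gap between the 1 micron and 0.1 micron velocities is why the heavier particles deposit near the source while submicron particles travel with the airflow, consistent with the distance-and-weight dependence the abstract describes.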


Author(s):
Abdlmonem H. Beitelmal
Drazen Fabris

New server and data center metrics are introduced to facilitate proper evaluation of data center power and cooling efficiency. These metrics can be used to help reduce the cost of operation and to provision data center cooling resources. The most relevant variables for these metrics are identified: the total facility power, the servers' idle power, the average server utilization, the cooling resources power, and the total IT equipment power. These metrics can be used to characterize and classify server and data center performance and energy efficiency regardless of size and location.
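The abstract lists the variables but not the metric definitions, so the following is a hypothetical sketch of how those variables might combine: classic PUE, plus a simple energy-proportionality ratio built from idle power and average utilization. Both function names and formulas are assumptions for illustration.

```python
# Hypothetical metrics built from the variables the abstract lists.
# PUE is the standard definition; the proportionality ratio is an
# illustrative construction, not necessarily the paper's metric.

def pue(total_facility_power, total_it_power):
    """Power Usage Effectiveness: facility power per unit IT power."""
    return total_facility_power / total_it_power

def proportionality(idle_power, peak_power, avg_utilization):
    """Fraction of average server power above the idle floor:
    1.0 would be perfectly load-proportional, 0 means all idle."""
    avg_power = idle_power + (peak_power - idle_power) * avg_utilization
    return (avg_power - idle_power) / avg_power

print(round(pue(1500.0, 1000.0), 2))             # 1.5
print(round(proportionality(150, 300, 0.3), 2))  # 0.23
```

A low proportionality value at typical utilization flags servers whose idle power dominates, which is precisely the case where better provisioning of cooling resources pays off.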


Author(s):
Thomas J. Breen
Ed J. Walsh
Jeff Punch
Amip J. Shah
Niru Kumari
...

As the energy footprint of data centers continues to increase, models that allow for “what-if” simulations of different data center design and management paradigms will be important. Prior work by the authors has described a multi-scale energy efficiency model that allows for evaluating the coefficient of performance of the data center ensemble (COPGrand), and demonstrated the utility of such a model for purposes of choosing operational set-points and evaluating design trade-offs. However, experimental validation of these models poses a challenge because of the complexity involved with tailoring such a model for implementation in legacy data centers, with shared infrastructure and limited control over IT workload. Further, test facilities with dummy heat loads or artificial racks in lieu of IT equipment generally have limited utility in validating end-to-end models, owing to the inability of such loads to mimic phenomena such as fan scalability. In this work, we describe the experimental analysis conducted in a special test chamber and a data center facility. The chamber, focusing on system-level effects, is loaded with an actual IT rack, and a compressor delivers chilled air to the chamber at a preset temperature. By varying the load in the IT rack as well as the air delivery parameters, such as flow rate and supply temperature, a setup which simulates the system level of a data center is created. Experimental tests within a live data center facility are also conducted where the operating conditions of the cooling infrastructure, such as fluid temperatures and flow rates, are monitored and can be analyzed to determine effects such as air flow recirculation and heat exchanger performance. Using the experimental data, a multi-scale model configuration emulating the data center can be defined. We compare the results from such experimental analysis to a multi-scale energy efficiency model of the data center, and discuss the accuracies as well as inaccuracies within such a model. Difficulties encountered in the experimental work are discussed. The paper concludes by discussing areas for improvement in such modeling and experimental evaluation. Further validation of the complete multi-scale data center energy model is planned.
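The ensemble coefficient of performance referenced above compares the heat the IT equipment produces against the power the whole cooling chain consumes to remove it. The sketch below states that ratio in code; the component breakdown and values are assumptions for illustration.

```python
# Minimal sketch of an ensemble coefficient of performance:
# COP = IT heat load removed / total cooling power consumed.
# The cooling-chain breakdown below is an illustrative assumption.

def cop_grand(it_power_w, cooling_powers_w):
    """Ratio of IT heat load to the summed power of all cooling
    stages (e.g. server fans, CRAC blowers, chiller, cooling tower)."""
    return it_power_w / sum(cooling_powers_w)

# Example: 100 kW of IT load, cooling chain drawing 40 kW in total.
print(cop_grand(100_000, [8_000, 12_000, 15_000, 5_000]))  # 2.5
```

A higher value means less cooling power per watt of IT load; the experiments above vary rack load and air delivery parameters precisely to see how this ratio moves across operating set-points.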


2019
Vol 11 (18)
pp. 4937
Author(s):
Jing Ni
Bowen Jin
Shanglei Ning
Xiaowei Wang

The energy consumption of fast-growing data centers is drawing attention not only from energy organizations and institutions all over the world but also from advocacy groups such as Greenpeace, and research shows that air conditioning accounts for a large proportion of the electricity cost in data centers. Therefore, more detailed investigations of air conditioning power consumption are warranted. In this study, three types of airflow distribution with different aisle layouts (the open aisle, the closed cold aisle, and the closed hot aisle) were investigated with Computational Fluid Dynamics (CFD) methods in a typical data center of four rows of racks. To evaluate thermal performance and bypass phenomena, the temperature increase index (β) and the energy utilization index (ηr) were used. The simulations show that both the closed cold aisle and the closed hot aisle exhibit better β and ηr trends than the open aisle; especially at high airflow rates, the β index decreases and the ηr index increases considerably. Moreover, the results show that closed aisles (both cold and hot) not only significantly improve the airflow distribution but also reduce the mixing of cold and hot flows, thereby improving energy efficiency. In addition, the closed-aisle design can accommodate the increasing density of installations, and the simulation method can readily evaluate cooling capacity.


Author(s):
Chandrakant Patel
Ratnesh Sharma
Cullen Bash
Sven Graupner

Computing will be pervasive, and the enablers of pervasive computing will be data centers housing computing, networking, and storage hardware. The data center of tomorrow is envisaged as one containing thousands of single-board computing systems deployed in racks. A data center with 1000 racks, over 30,000 square feet, would require 10 MW to power the computing infrastructure. At this power dissipation, an additional 5 MW would be needed by the cooling resources to remove the dissipated heat. At $100/MWh, the cooling alone would cost $4 million per annum for such a data center. The concept of the Computing Grid, based on coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, is emerging as the new paradigm in distributed and pervasive computing for scientific as well as commercial applications. We envision a global network of data centers housing an aggregation of computing, networking, and storage hardware. The increased compaction of such devices in data centers has created thermal and energy management issues that inhibit the sustainability of such a global infrastructure. In this paper, we propose the framework of the Energy Aware Grid, which will provide a global utility infrastructure explicitly incorporating energy efficiency and thermal management among data centers. Designed around an energy-aware co-allocator, workload placement decisions will be made across the Grid based on data center energy efficiency coefficients. The coefficient, evaluated by the data center's resource allocation manager, is a complex function of the data center thermal management infrastructure and the seasonal and diurnal variations. A detailed procedure for implementation of a test case is provided, with an estimate of energy savings to justify the economics. An example workload deployment shown in the paper aspires to find the most energy-efficient data center in the global network of data centers.
The locality-based energy efficiency in a data center is shown to arise from the use of ground-coupled loops in cold climates to lower the ambient temperature for heat rejection, e.g., computing in and rejecting heat from a data center at a nighttime ambient of 20°C in New Delhi, India, while Phoenix, USA is at 45°C. The efficiency of the cooling system in the New Delhi data center derives from the lower lift from evaporator to condenser. Besides the obvious advantage due to the external ambient, the paper also incorporates techniques that rate the efficiency arising from the internal thermo-fluid behavior of a data center in workload placement decisions.
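The "lower lift" argument can be sketched with the ideal refrigeration bound, COP = T_evap / (T_cond - T_evap) in kelvin: a cooler condenser-side ambient yields a higher attainable COP, so an energy-aware co-allocator would favor that site. The evaporator temperature and condenser approach below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of locality-aware placement: compare ideal (Carnot)
# cooling COP across sites and pick the best. Temperatures are
# illustrative; the 10 C condenser approach is an assumption.

def carnot_cop(t_evap_c, t_cond_c):
    """Ideal refrigeration COP = T_evap / (T_cond - T_evap), kelvin."""
    t_evap = t_evap_c + 273.15
    t_cond = t_cond_c + 273.15
    return t_evap / (t_cond - t_evap)

sites = {"New Delhi (night)": 20.0, "Phoenix": 45.0}  # ambient, deg C
evap = 10.0  # assumed evaporator temperature, deg C

cops = {name: carnot_cop(evap, ambient + 10.0)  # condenser approach
        for name, ambient in sites.items()}
best = max(cops, key=cops.get)
print(best, round(cops[best], 1))  # the cooler site wins
```

A real co-allocator would fold in the internal thermo-fluid efficiency the paper describes, but even this idealized bound shows why the 25°C ambient gap between the two sites translates into a large cooling-energy advantage.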

