Metrics and an Infrastructure Model to Evaluate Data Center Efficiency

Author(s):  
Christian L. Belady ◽  
Christopher G. Malone

This work describes two data center efficiency metrics: Power Usage Effectiveness (PUE) and Compute Power Efficiency (CPE). PUE is the ratio of total data center power to the power delivered to IT equipment; its reciprocal gives the fraction of facility power that does IT work. CPE characterizes overall data center efficiency, considering IT equipment utilization as well as how power is used in the data center. PUE results from three data center studies are presented here. The data suggest that a carefully designed and managed data center can achieve a PUE of 2.0; more studies are required to determine the range of values for typical facilities. A data center infrastructure and energy cost model is presented to compare hardware costs with infrastructure and energy costs, and the effect of PUE on these costs is examined to illustrate how data center efficiency drives the total cost of operating a data center.
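
As a rough illustration of how the two metrics relate (using the standard definitions rather than anything specific to this paper), a minimal sketch in Python:

```python
# Illustrative sketch: the standard PUE definition and the CPE form
# proposed by Malone and Belady, CPE = IT utilization / PUE.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power (>= 1.0)."""
    return total_facility_kw / it_equipment_kw

def cpe(it_utilization: float, total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute Power Efficiency: fraction of facility power doing useful compute."""
    return it_utilization / pue(total_facility_kw, it_equipment_kw)

# Example: a 1,000 kW facility delivering 500 kW to IT gear at 40% utilization.
print(pue(1000, 500))       # 2.0 -- the "carefully designed" value cited above
print(cpe(0.4, 1000, 500))  # 0.2 -- only 20% of facility power does compute work
```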

Author(s):  
Tahir Cader ◽  
Levi Westra ◽  
Andres Marquez

Although semiconductor manufacturers have provided temporary relief with lower-power multi-core microprocessors, OEMs and data center operators continue to push the limits of individual rack power densities. It is not uncommon today for data center operators to deploy multiple 20 kW racks in a facility. Such rack densities exacerbate the major issues of power and cooling in data centers, and operators are now forced to take a hard look at the efficiencies of their facilities. Malone and Belady (2006) proposed three metrics, i.e., Power Usage Effectiveness (PUE), Data Center Efficiency (DCE), and the Energy-to-Acquisition Cost ratio (EAC), to help data center operators quickly quantify the efficiency of their data centers. In their paper, Malone and Belady present nominal values of PUE across a broad cross-section of data centers, at four levels of optimization; one of these optimizations involves the use of Computational Fluid Dynamics (CFD). In the current paper, CFD is used to conduct an in-depth investigation of a liquid-cooled data center that would potentially be housed at the Pacific Northwest National Laboratory (PNNL). The boundary conditions used in the CFD model are based upon actual measurements on a rack of liquid-cooled servers housed at PNNL. The analysis shows that the liquid-cooled facility could achieve a PUE of 1.57, compared to a PUE of 3.0 for a typical data center (the lower the PUE, the better, with values below 1.6 approaching ideal). The gain in data center efficiency also translates into an increase in the amount of IT equipment that can be deployed: at a PUE of 1.57, the analysis shows that 91% more IT equipment can be deployed than in the typical data center. The paper discusses the PUE analysis and also explores raising data center efficiency via multiple cooling technologies and CFD analysis; complete results of the analyses are presented in the paper.
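
The 91% figure follows directly from the PUE definition: for a fixed facility power envelope, deployable IT power scales as 1/PUE. A quick sketch of that arithmetic:

```python
# Capacity arithmetic behind the 91% figure: at constant facility power,
# IT power = facility power / PUE, so capacity scales as 1/PUE.

def it_capacity_gain(pue_baseline: float, pue_improved: float) -> float:
    """Fractional increase in deployable IT power at constant facility power."""
    return pue_baseline / pue_improved - 1.0

print(f"{it_capacity_gain(3.0, 1.57):.0%}")  # ~91% more IT equipment
```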


Author(s):  
Jiawei Huang ◽  
Shiqi Wang ◽  
Shuping Li ◽  
Shaojun Zou ◽  
Jinbin Hu ◽  
...  

Modern data center networks typically adopt multi-rooted tree topologies such as leaf-spine and fat-tree to provide high bisection bandwidth, and load balancing is critical for achieving low latency and high throughput. Although per-packet schemes such as Random Packet Spraying (RPS) can achieve high network utilization and near-optimal tail latency in symmetric topologies, they are prone to significant packet reordering, which degrades network performance. Coding-based schemes have been proposed to alleviate packet reordering and loss, but they ignore the traffic characteristics of data center networks and fail to achieve good performance. In this paper, we propose a Heterogeneous Traffic-aware Partition Coding scheme, named HTPC, to eliminate the impact of packet reordering and improve the performance of both short and long flows. HTPC smoothly adjusts the number of redundant packets based on multi-path congestion information and traffic characteristics, so that the tail probability of short flows and the timeout probability of long flows are both reduced. Through a series of large-scale NS2 simulations, we demonstrate that HTPC reduces average flow completion time by up to 60% compared with state-of-the-art mechanisms.
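
As a rough illustration of the kind of policy HTPC describes (the thresholds and linear form below are assumptions for illustration, not the paper's algorithm), redundancy can be scaled from per-path congestion and flow size:

```python
# Illustrative sketch only: a congestion- and flow-size-aware redundancy
# policy in the spirit of HTPC. The functional form is an assumption.

def redundant_packets(batch_size: int, path_loss_rates: list[float],
                      is_short_flow: bool) -> int:
    """Choose how many coded (redundant) packets to spray alongside a batch."""
    worst_loss = max(path_loss_rates)   # congestion signal across the paths
    base = batch_size * worst_loss      # expected number of losses to cover
    # Short flows are latency-sensitive: over-provision so one RTT suffices.
    factor = 2.0 if is_short_flow else 1.0
    return max(1, round(base * factor))

print(redundant_packets(16, [0.01, 0.05, 0.02], is_short_flow=True))   # 2
print(redundant_packets(64, [0.01, 0.05, 0.02], is_short_flow=False))  # 3
```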


Proceedings ◽  
2021 ◽  
Vol 68 (1) ◽  
pp. 13
Author(s):  
Yixuan Sun ◽  
Stephen Beeby

This paper presents COMSOL simulations of magnetically coupled resonant wireless power transfer (WPT), using simplified coil models for embroidered planar two-coil and four-coil systems. The power transmission of both systems is studied and compared by varying the separation, rotation angle, and misalignment distance at resonance (5 MHz). Frequency splitting occurs at short separations in both the two-coil and four-coil systems, resulting in lower power transmission; the systems are therefore driven from 4 MHz to 6 MHz to analyze its impact at close separations. The results show that both systems reach a peak efficiency of over 90% after tuning to the proper frequency to overcome frequency splitting at separations below 10 cm, while the four-coil design achieves higher power efficiency at separations over 10 cm. The power efficiency of both systems decreases linearly once the axial misalignment exceeds 4 cm or the misalignment angle between receiver and transmitter exceeds 45 degrees.
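
The splitting behavior can be approximated with the standard two-resonator model, in which a pair tuned to f0 splits into modes near f0/sqrt(1 ± k); the coupling coefficient below is an assumed value for illustration, not one extracted from the simulations:

```python
# First-order coupled-resonator splitting model (an approximation, not the
# COMSOL result): an over-coupled pair resonating at f0 splits into two modes.
import math

def split_frequencies(f0_mhz: float, k: float) -> tuple[float, float]:
    """Even/odd mode frequencies of two identical coupled resonators."""
    return f0_mhz / math.sqrt(1 + k), f0_mhz / math.sqrt(1 - k)

# Strong coupling at close separation (k = 0.25 assumed for illustration):
low, high = split_frequencies(5.0, 0.25)
print(f"{low:.2f} MHz, {high:.2f} MHz")  # ~4.47 MHz and ~5.77 MHz,
# consistent with sweeping the drive from 4 MHz to 6 MHz as described above.
```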


2014 ◽  
Vol 95 (12) ◽  
pp. 1835-1848 ◽  
Author(s):  
Michael F. Squires ◽  
Jay H. Lawrimore ◽  
Richard R. Heim ◽  
David A. Robinson ◽  
Mathieu R. Gerbush ◽  
...  

This paper describes a new snowfall index that quantifies the impact of snowstorms within six climate regions in the United States. The regional snowfall index (RSI) is based on the spatial extent of snowfall accumulation, the amount of snowfall, and the juxtaposition of these elements with population; including population information provides a measure of the societal susceptibility for each region. The RSI is an evolution of the Northeast snowfall impact scale (NESIS), which NOAA's National Climatic Data Center began producing operationally in 2006. While NESIS was developed for storms with major impacts in the Northeast, it includes all snowfall during the lifetime of a storm across the United States and as such can be thought of as a quasi-national index calibrated to Northeast snowstorms. By contrast, the RSI is a regional index calibrated to specific regions using only the snow that falls within each region. This paper describes the methodology used to compute the RSI, which requires region-specific parameters and thresholds, and its application within six climate regions in the eastern two-thirds of the nation; the process used to select the region-specific parameters and thresholds is explained. The new index has been calculated for more than 580 snowstorms that occurred between 1900 and 2013, providing a century-scale historical perspective for these snowstorms. The RSI is computed for category 1 or greater storms in near-real time, usually a day after a storm has ended.
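
For readers unfamiliar with this family of indices, the NESIS/RSI form combines area and population terms at successive snowfall thresholds; the sketch below assumes that general form with placeholder thresholds and normalizers, not the operational region-specific values the paper derives:

```python
# Hedged sketch of the NESIS/RSI functional form: area and population terms
# at successive snowfall thresholds, each normalized by a regional mean.
# All parameters below are placeholders for illustration.

def regional_snowfall_index(area: dict[float, float], pop: dict[float, float],
                            mean_area: dict[float, float],
                            mean_pop: dict[float, float]) -> float:
    """area[t]/pop[t]: square km and people receiving >= t inches of snow."""
    return sum(area[t] / mean_area[t] + pop[t] / mean_pop[t] for t in area)

# Two illustrative thresholds (inches) for a hypothetical storm:
score = regional_snowfall_index(
    area={4: 2.0e5, 10: 5.0e4}, pop={4: 8.0e6, 10: 1.5e6},
    mean_area={4: 1.5e5, 10: 6.0e4}, mean_pop={4: 6.0e6, 10: 2.0e6},
)
print(round(score, 2))  # raw index; such scores map onto categories 1-5
```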


Author(s):  
James W. VanGilder ◽  
Zachary R. Sheffer ◽  
Xuanhang Simon Zhang ◽  
Collyn T. O’Kane

Typical data center architectures utilize a raised floor: cooling airflow is pumped into an under-floor plenum and exits through perforated floor tiles located in front of IT equipment racks. The under-floor space is also a convenient place to locate critical building infrastructure, such as chilled-water piping and power and network cabling. Unfortunately, the presence of such objects can disrupt the distribution of cooling airflow. While the effects of other design parameters, such as room layout, plenum depth, perforated tile type, and leakage paths, have been systematically studied and corresponding best practices outlined, the literature offers no specific advice on the effect of under-floor infrastructure on airflow distribution. This paper studies the effects of such obstructions, primarily through CFD analyses of several layouts based on actual facilities. Additionally, corresponding scenarios are analyzed using a Potential Flow Model (PFM) that includes a recently proposed obstruction-modeling technique. It is found that under-floor obstructions significantly affect airflow distribution only when they are located very near perforated tiles and cooling units and occupy a substantial fraction of the total plenum depth.
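
A minimal sketch of the potential-flow idea (illustrative only, not the paper's PFM or its obstruction technique): because solid boundaries are streamlines in ideal flow, an under-floor obstruction becomes one more fixed-value region in a Laplace solve for the streamfunction:

```python
# Toy 2D plenum cross-section: relax Laplace's equation for the streamfunction
# psi by Jacobi iteration. Walls and solids are streamlines (constant psi).
import numpy as np

nx, ny = 60, 20
psi = np.tile(np.linspace(0.0, 1.0, ny)[:, None], (1, nx))  # uniform-flow guess
solid = np.zeros((ny, nx), dtype=bool)
solid[0:8, 25:30] = True              # assumed floor-mounted pipe bundle

for _ in range(3000):                 # Jacobi relaxation
    new = 0.25 * (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
                  np.roll(psi, 1, 1) + np.roll(psi, -1, 1))
    new[0, :], new[-1, :] = 0.0, 1.0  # floor slab and raised-floor surface
    new[:, 0] = np.linspace(0, 1, ny) # uniform supply at the cooling unit
    new[:, -1] = new[:, -2]           # open outflow toward the far tiles
    new[solid] = 0.0                  # obstruction sits on the floor streamline
    psi = new

# Local speed ~ |grad psi|; high ratios indicate passages choked by the solid.
dpsi_dy, dpsi_dx = np.gradient(psi)
speed = np.hypot(dpsi_dy, dpsi_dx)
print(f"peak/mean plenum speed: {speed.max() / speed.mean():.1f}")
```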


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jennifer Bray ◽  
Dawn Brooker ◽  
Isabelle Latham ◽  
Darrin Baines

Purpose
The purpose of this paper is to populate a theoretical cost model with real-world data, calculating the staffing, resource, and consumable costs of delivering Namaste Care Intervention UK (NCI-UK) sessions versus "usual care" for care home residents with advanced dementia.

Design/methodology/approach
Data from five care homes delivering NCI-UK sessions populated the cost model to generate session- and resident-level costs. Comparator usual-care costs were calculated from expert opinion and observational data. Outcome data for residents assessed the impact of NCI-UK sessions and were aligned with the resident-level costs of NCI-UK.

Findings
NCI-UK had a positive impact on residents' physical, social and emotional well-being. An average NCI-UK group session cost £220.53, 22% more than usual care, and ran for 1.5-2 h per day for 4-9 residents. No additional staff were employed to deliver NCI-UK, but staff-resident ratios were higher during Namaste Care. Usual-care costs were calculated for the same time period, when no group activity was organised. The average cost per resident per NCI-UK session was £38.01, £7.24 more than usual care. In practice, costs were partly offset because consumables and resources were available from existing stock within a home.

Originality/value
Activity costs are rarely calculated, as the focus tends to be on impact and outcomes. This paper shows that, although not cost neutral as previously thought, NCI-UK is a low-cost way of improving the lives of people living with advanced dementia in care homes.
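
A minimal sketch of how such a session-level cost model fits together, assuming a simple staffing-plus-consumables structure; only the headline figures come from the paper:

```python
# Hedged sketch of a session-level cost model. The decomposition is an
# assumption for illustration; only the £ figures come from the abstract.

def session_cost(staff_hours: float, hourly_rate: float,
                 consumables: float, resources: float) -> float:
    """Total cost of delivering one NCI-UK group session."""
    return staff_hours * hourly_rate + consumables + resources

def cost_per_resident(session_total: float, residents: float) -> float:
    return session_total / residents

# The reported averages imply roughly six residents per session:
print(round(220.53 / 38.01, 1))                   # ~5.8 residents
print(round(cost_per_resident(220.53, 5.8), 2))   # ~38.02, matching the paper
```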


Author(s):  
Sheng Kang ◽  
Guofeng Chen ◽  
Chun Wang ◽  
Ruiquan Ding ◽  
Jiajun Zhang ◽  
...  

With the advent of big data and cloud computing solutions, enterprise demand for servers is increasing, with especially strong growth in Intel-based x86 server platforms. Today's data centers are in constant pursuit of high-performance, high-availability computing coupled with low power consumption and low heat generation, all managed through advanced telemetry data gathering. This paper showcases one such solution: an updated rack and server architecture that promises these improvements. The ability to manage server and data center power consumption and cooling more completely is critical to controlling data center costs and reducing PUE.

Traditional Intel-based 1U and 2U form-factor servers have existed in the data center for decades. These general-purpose x86 designs from the major OEMs are, for all practical purposes, very similar in power consumption and thermal output, and their power supplies and thermal designs have historically not been optimized for high efficiency. IT managers also need richer server telemetry to optimize data center cooling and power use. An improved server/rack design is therefore needed, one that takes advantage of more efficient power supplies or PDUs and cools server compute resources more efficiently than traditional internal server fans. This is the constant pursuit of corporations looking for new ways to improve efficiency and gain a competitive advantage.

One way to optimize power consumption and improve cooling is a complete redesign of the traditional server rack: extracting internal server power supplies and server fans and centralizing them within the rack. This design achieves a new, lower power target by utilizing centralized, high-efficiency PDUs that power all servers within the rack. Cooling is improved by large, efficient rack-level fans that supply airflow to all servers, and by opening up the server chassis to allow greater airflow across server components. Centralized power supplies break through traditional per-server power limits: rack-level PDUs can be operated closer to their optimal efficiency point, and combining online and offline (cold-backup) modes within a single power shelf lets the rack achieve optimal power efficiency. In addition, unifying the mechanical structure and thermal definitions within the rack, for both server cooling and PSU telemetry, allows IT to collect all server power and thermal information centrally for easier analysis and processing.
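
A sketch of the efficiency logic behind centralizing supplies (the policy and numbers are assumptions for illustration, not the paper's design): with power shelves pooled at rack level, only enough supplies need be energized for each to run near its peak-efficiency load point, with the rest held in cold backup:

```python
# Illustrative sketch: PSU staging for a centralized rack power shelf.
# The sweet-spot fraction and ratings below are assumed values.
import math

def active_supplies(rack_load_w: float, psu_rating_w: float,
                    sweet_spot: float = 0.5, redundancy: int = 1) -> int:
    """Number of PSUs to energize for an N+redundancy rack power shelf,
    sized so each active supply runs near its peak-efficiency load point."""
    needed = math.ceil(rack_load_w / (psu_rating_w * sweet_spot))
    return needed + redundancy

# 12 kW rack on 3 kW supplies, each most efficient near 50% load:
print(active_supplies(12_000, 3_000))  # 9 energized (8 needed + 1 redundant);
# the remaining shelf slots stay in cold backup until rack load rises.
```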


Author(s):  
Muhammad Ishaq ◽  
Mohammad Kaleem ◽  
Numan Kifayat

This chapter briefly introduces the data center network and reviews the challenges facing future intra-data-center networks in terms of scalability, cost effectiveness, power efficiency, upgrade cost, and bandwidth utilization. Current data center network architecture is discussed in detail, and its drawbacks are pointed out with respect to the above-mentioned parameters. A detailed background is provided on how the technology moved from opaque to transparent optical networks. Additionally, the chapter surveys the data center network architectures proposed so far by different researchers, teams, and companies to address current problems and meet the demands of future intra-data-center networks.


Author(s):  
Nenad Jukic ◽  
Miguel Velasco

Defining data warehouse requirements is widely recognized as one of the most important steps in the larger data warehouse system development process. This paper examines the potential risks and pitfalls within the data warehouse requirement collection and definition process. A real scenario of a large-scale data warehouse implementation is given, and details of this project, which ultimately failed due to an inadequate requirement collection and definition process, are described. The presented case underscores and illustrates the impact of the requirement collection and definition process on a data warehouse implementation, and it is analyzed within the context of existing approaches, methodologies, and best practices for the prevention and avoidance of typical data warehouse requirement errors and oversights.


2019 ◽  
Vol 1304 ◽  
pp. 012022
Author(s):  
Jianwen Huang ◽  
Cheng Chen ◽  
Guiyang Guo ◽  
Zhang Zhang ◽  
Zhen Li
