Performance Modeling and Practical Use Cases for Black-Box SSDs

2021 ◽  
Vol 17 (2) ◽  
pp. 1-38
Author(s):  
Joonsung Kim ◽  
Kanghyun Choi ◽  
Wonsik Lee ◽  
Jangwoo Kim

Modern servers are actively deploying Solid-State Drives (SSDs) thanks to their high throughput and low latency. However, current server architects cannot achieve the full performance potential of commodity SSDs, as SSDs are complex devices designed for specific goals (e.g., latency, throughput, endurance, cost) with their internal mechanisms undisclosed to users. In this article, we propose SSDcheck, a novel SSD performance model that extracts various internal mechanisms and predicts the latency of the next access to commodity black-box SSDs. We identify key performance-critical features (e.g., garbage collection, write buffering) and find their parameters (i.e., size, threshold) for each SSD by using our novel diagnosis code snippets. SSDcheck then constructs a performance model for a target SSD and dynamically manages the model to predict the latency of the next access. In addition, SSDcheck extracts and exposes other useful internal mechanisms (e.g., the fetch unit in multi-queue SSDs, idle-time intervals that trigger background tasks) so that the storage system can fully exploit SSDs. Building on these features and the performance model, we propose multiple practical use cases. Our evaluations show that SSDcheck's performance model is highly accurate and that the proposed use cases achieve significant performance improvements in various scenarios.
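To make the idea concrete, here is a minimal sketch of the kind of state-machine latency predictor such a model implies. All parameter names and latency values are hypothetical placeholders; SSDcheck extracts the real, device-specific values with its diagnosis code snippets.

# Illustrative sketch of a black-box SSD write-latency predictor in the
# spirit of SSDcheck. Buffer size, GC threshold, and latencies below are
# invented placeholders, not measured values.
class SSDLatencyModel:
    def __init__(self, write_buffer_slots=64, gc_threshold=1024,
                 lat_buffered_us=20, lat_flash_us=80, lat_gc_us=2000):
        self.buffer_slots = write_buffer_slots
        self.buffer_free = write_buffer_slots  # free slots in the write buffer
        self.writes_since_gc = 0               # proxy for GC pressure
        self.gc_threshold = gc_threshold
        self.lat_buffered_us = lat_buffered_us
        self.lat_flash_us = lat_flash_us
        self.lat_gc_us = lat_gc_us

    def predict_next_write(self):
        """Predicted latency of the next write, in microseconds."""
        if self.writes_since_gc + 1 >= self.gc_threshold:
            return self.lat_gc_us         # a garbage-collection stall is due
        if self.buffer_free > 0:
            return self.lat_buffered_us   # absorbed by the internal write buffer
        return self.lat_flash_us          # buffer full: write goes to flash

    def observe_write(self):
        """Advance the model state after a write actually completes."""
        self.writes_since_gc += 1
        if self.writes_since_gc >= self.gc_threshold:
            self.writes_since_gc = 0              # GC drained the device
            self.buffer_free = self.buffer_slots  # buffer flushed
        elif self.buffer_free > 0:
            self.buffer_free -= 1

A storage scheduler could consult predict_next_write() and defer non-urgent I/O whenever a GC-induced latency spike is predicted.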

Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 847
Author(s):  
Sopanhapich Chum ◽  
Heekwon Park ◽  
Jongmoo Choi

This paper proposes a new resource management scheme that supports SLAs (Service-Level Agreements) in a big-data distributed storage system. It makes use of two mapping modes, an isolated mode and a shared mode, in an adaptive manner. Specifically, to ensure different QoS (Quality of Service) requirements among clients, it isolates storage devices so that urgent clients do not suffer interference from normal clients. When there is no urgent client, it switches to the shared mode so that normal clients can access all storage devices, thus achieving full performance. To provide this adaptability effectively, it devises two techniques, called logical cluster and normal inclusion. In addition, this paper explores how to exploit heterogeneous storage devices, HDDs (Hard Disk Drives) and SSDs (Solid-State Drives), to support SLAs. It examines two use cases and observes that separating data and metadata onto different devices has a positive impact on the performance-per-cost ratio. Evaluation results from a real implementation show that the proposal can satisfy the requirements of diverse clients and provides better performance than a fixed mapping-based scheme.
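As an illustration, the sketch below shows the adaptive switch between the two mapping modes. The device partitioning and the notion of an "urgent" client are simplified assumptions; the paper's logical-cluster and normal-inclusion techniques are more involved.

# Hypothetical sketch of adaptive isolated/shared device mapping.
def map_devices(devices, urgent_clients, normal_clients, reserved_fraction=0.5):
    """Return per-client device sets for the current scheduling epoch."""
    if urgent_clients:
        # Isolated mode: reserve a slice of devices for urgent clients so
        # normal traffic cannot interfere with their QoS targets.
        cut = max(1, int(len(devices) * reserved_fraction))
        urgent_set, normal_set = devices[:cut], devices[cut:]
    else:
        # Shared mode: no urgent client, so everyone may use all devices
        # and the system delivers its full aggregate bandwidth.
        urgent_set, normal_set = [], devices
    return ({c: urgent_set for c in urgent_clients},
            {c: normal_set for c in normal_clients})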


2020 ◽  
Vol 245 ◽  
pp. 04017
Author(s):  
Dario Barberis ◽  
Igor Aleksandrov ◽  
Evgeny Alexandrov ◽  
Zbigniew Baranowski ◽  
Gancho Dimitrov ◽  
...  

The ATLAS EventIndex was designed in 2012-2013 to provide a global event catalogue and limited event-level metadata for ATLAS analysis groups and users during LHC Run 2 (2015-2018). It provides a reliable service for the initial use cases (mainly event picking) and several additional ones, such as production consistency checks, duplicate event detection, and measurements of the overlaps of trigger chains and derivation datasets. LHC Run 3, starting in 2021, will see increased data-taking and simulation production rates; the current infrastructure would still cope with them but may be stretched to its limits by the end of Run 3. This paper describes the implementation of a new core storage service that will provide at least the same functionality as the current one at increased data ingestion and search rates and with increasing volumes of stored data. It is based on a set of HBase tables, with schemas derived from the current Oracle implementation, coupled to Apache Phoenix for data access; in this way the advantages of a BigData-based storage system are combined with the possibility of SQL as well as NoSQL data access, allowing most of the existing code for metadata integration to be reused.
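For illustration, event picking through Phoenix's SQL layer might look like the following, using the phoenixdb DB-API driver against the Phoenix Query Server. The server URL, table name, and column names are invented here; the real EventIndex schemas are derived from the Oracle implementation.

# Minimal sketch of an event-picking lookup via Apache Phoenix over HBase.
import phoenixdb

conn = phoenixdb.connect('http://phoenix-query-server:8765/', autocommit=True)
cur = conn.cursor()
# Look up the dataset GUID holding a given (run, event) pair -- the classic
# event-picking use case. Table and columns are illustrative assumptions.
cur.execute(
    "SELECT guid FROM event_index WHERE run_number = ? AND event_number = ?",
    (358031, 1234567))
for (guid,) in cur.fetchall():
    print(guid)
conn.close()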


2013 ◽  
Vol 8-9 ◽  
pp. 185-194
Author(s):  
Bogdan Tomoiaga ◽  
Mircea D. Chindris ◽  
Andreas Sumper ◽  
Mousa Marzband

The concept of the microgrid was first introduced in 2001 as a solution for the reliable integration of distributed generation and for harnessing its multiple advantages. Specific control and energy management systems must be designed for microgrid operation to ensure reliable, secure, and economical operation, whether in grid-connected or stand-alone mode. The problem of energy management in microgrids consists of finding the optimal or near-optimal unit commitment and dispatch of the available sources and energy storage systems so that certain selected criteria are achieved. In most cases, the energy management problem does not satisfy Bellman's principle of optimality because of the energy storage systems. Consequently, this paper presents an original fast heuristic algorithm for energy management in stand-alone microgrids that avoids wasting the available renewable potential in each time interval. A typical test microgrid has been analysed to demonstrate the accuracy and promptness of the proposed algorithm. The obtained cost of energy is low (i.e., the solution quality is high), the primary adjustment reserve is assured by the energy storage system, and the execution time is very short (a fast algorithm). Furthermore, the proposed algorithm can be used in real-time energy management systems.
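A hedged sketch of a greedy per-interval dispatch rule in the spirit of that heuristic: use all available renewable power first so none is wasted, buffer any surplus in storage, and cover a deficit from storage before falling back to dispatchable generation. Efficiencies and limits are illustrative assumptions, not the paper's algorithm.

# Greedy single-interval dispatch: renewables first, then storage, then
# dispatchable generation. All parameters are placeholders.
def dispatch_interval(load_kw, renewable_kw, soc_kwh, cap_kwh,
                      dt_h=1.0, eta=0.9):
    """Return (generator_kw, new_soc_kwh) for one time interval."""
    surplus = renewable_kw - load_kw
    if surplus >= 0:
        # Store the surplus; anything beyond capacity is unavoidably curtailed.
        soc_kwh = min(cap_kwh, soc_kwh + surplus * dt_h * eta)
        return 0.0, soc_kwh
    deficit_kwh = -surplus * dt_h
    from_storage = min(soc_kwh * eta, deficit_kwh)   # discharge limit
    soc_kwh -= from_storage / eta
    generator_kw = (deficit_kwh - from_storage) / dt_h
    return generator_kw, soc_kwh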


2017 ◽  
Vol 3 (6) ◽  
pp. 404
Author(s):  
Dwi M. Syabani ◽  
Hana Eliyani ◽  
Suharsono Suharsono ◽  
Fedik A. Rantam ◽  
Anwar Ma’ruf

Estimation of the postmortem interval is one of the challenges in forensic science. The aim of this study was to construct a MARS model for estimating the postmortem time interval (PMT) from algor mortis temperature in rats. Sixteen healthy male rats (Rattus norvegicus), one month old and weighing 100 g, were randomly divided into two groups (eight per group) and acclimated in an ambient room (temperature about 28 °C) and in a conditioned room (temperature about 20 °C), respectively. The animals were then sacrificed over two days (four rats per day for each room), and algor mortis was recorded as rectal temperature at 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, and 22 h after death. The MARS model is a nonlinear regression expressed as a multilinear curve with spline fitting; for the ambient room it is defined as Y = 35.321 + 1.253*BF1 + 0.436*BF2 - 1.319*BF3, and for the 20 °C conditioned room as Y = 29.980 + 1.354*BF1 + 0.799*BF2 - 1.347*BF3. Expressed as a multilinear curve, the algor mortis function for the ambient room falls into three PMT intervals: 1) Y = 37.94 - 0.11*t (0-2 h, p > 0.00); 2) Y = 40.88 - 1.87*t (2-6 h, p < 0.00); and 3) Y = 30.82 - 0.09*t (6-22 h, p < 0.00); for the 20 °C conditioned room: 1) Y = 34.78 - 0.09*t (0-2 h, p < 0.00); 2) Y = 37.97 - 2.38*t (2-6 h, p < 0.00); and 3) Y = 25.36 - 0.04*t (6-22 h, p > 0.00). The decline of algor mortis in the conditioned room was steeper than in the ambient room over the 2-6 h PMT interval (β: 2.38 vs. 1.87). The postmortem time interval of rats can thus be estimated from algor mortis temperature with a MARS model; the estimation model is a multilinear curve with splines fitted in both experimental rooms.
Keywords: postmortem time interval, algor mortis, MARS model estimation
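A worked example of using the reported segments: the code below evaluates the ambient-room piecewise cooling model and inverts it to estimate the postmortem interval from a measured rectal temperature. The segment coefficients are taken directly from the abstract; everything else is illustrative.

# Piecewise linear cooling model for the ambient room, from the abstract.
AMBIENT_SEGMENTS = [              # (t_min h, t_max h, intercept, slope)
    (0.0, 2.0, 37.94, -0.11),
    (2.0, 6.0, 40.88, -1.87),
    (6.0, 22.0, 30.82, -0.09),
]

def temp_at(t_h, segments=AMBIENT_SEGMENTS):
    """Predicted rectal temperature (deg C) t_h hours after death."""
    for t0, t1, a, b in segments:
        if t0 <= t_h <= t1:
            return a + b * t_h
    raise ValueError("time outside the modeled 0-22 h range")

def pmt_estimate(temp_c, segments=AMBIENT_SEGMENTS):
    """Invert the model: hours since death for a measured temperature.
    Segments overlap slightly at their joints; the first match wins."""
    for t0, t1, a, b in segments:
        t = (temp_c - a) / b          # solve a + b*t = temp_c
        if t0 <= t <= t1:
            return t
    raise ValueError("temperature outside the modeled range")

print(pmt_estimate(33.4))  # ~4.0 h, on the steep 2-6 h segment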


2020 ◽  
pp. 082-093
Author(s):  
S.Yu. Punda

A review of modern data storage architectures was conducted, and the advantages and disadvantages of each were given. The data storage systems of the IBM FlashSystem family were analyzed, along with the Spectrum Virtualize software responsible for virtualization, compression, distribution, and replication of the data stored on the system. A mathematical model of the IBM Storwize V5030E storage system was developed, and well-known metrics were used to evaluate its performance with spindle and solid-state drives. The effect of hardware and software data compression on system performance was revealed experimentally. Recommendations are formulated that help a business user determine which media and which technology stack to use for the tasks at hand.
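A back-of-the-envelope sketch of the compression effect: with inline compression the drives physically move fewer bytes, so host-visible throughput scales with the compression ratio until the compression engine itself becomes the bottleneck. All figures below are illustrative assumptions, not measurements of the Storwize V5030E.

# Toy model of host-visible throughput under inline compression.
def effective_throughput_mb_s(raw_mb_s, compression_ratio, engine_mb_s):
    """Host-visible sequential throughput with inline compression."""
    # The backend absorbs raw_mb_s of compressed data, so the host can
    # push compression_ratio times more, capped by the compression engine.
    return min(raw_mb_s * compression_ratio, engine_mb_s)

print(effective_throughput_mb_s(400, 2.0, 1200))  # slow pool example -> 800
print(effective_throughput_mb_s(900, 2.0, 1200))  # fast pool example -> 1200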


2017 ◽  
Vol 139 (5) ◽  
Author(s):  
Markus Schnoes ◽  
Eberhard Nicke

Airfoil shapes tailored to specific inflow conditions and loading requirements can offer significant performance potential over classic airfoil shapes. However, their optimal operating range has to be matched thoroughly to the overall compressor layout. This paper describes methods to organize a large set of optimized airfoils in a database and its application in throughflow design. Optimized airfoils are structured in five dimensions: inlet Mach number, blade stagger angle, pitch-chord ratio, maximum thickness-chord ratio, and a parameter for aerodynamic loading. In this space, a large number of airfoil geometries is generated by means of numerical optimization. During the optimization of each airfoil, the performance at design and off-design conditions is evaluated with the blade-to-blade flow solver MISES. Together with the airfoil geometry, the database stores automatically calibrated correlations which describe the cascade performance in throughflow calculation. Based on these methods, two subsonic stages of a 4.5-stage transonic research compressor are redesigned. Performance of the baseline and updated geometry is evaluated with 3D CFD. The overall approach offers accurate throughflow design incorporating optimized airfoil shapes and a fast transition from throughflow to 3D CFD design.
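To illustrate how a throughflow code might query such a database, the sketch below performs a nearest-neighbour lookup in the five design dimensions named above. The entries, normalization, and lookup rule are invented for illustration; the actual database also stores the calibrated cascade-performance correlations.

# Nearest-neighbour lookup in the 5-D airfoil design space (illustrative).
import math

# (inlet Mach, stagger deg, pitch/chord, max thickness/chord, loading param)
DB = [
    ((0.60, 35.0, 0.80, 0.08, 0.45), "airfoil_0001"),
    ((0.70, 40.0, 0.70, 0.06, 0.50), "airfoil_0002"),
    # ... further optimized entries would follow
]

SCALE = (0.5, 30.0, 0.5, 0.05, 0.5)  # rough axis spans used for normalization

def nearest_airfoil(query, db=DB):
    """Return the database entry closest to the query point."""
    def dist(p):
        return math.sqrt(sum(((a - b) / s) ** 2
                             for a, b, s in zip(p, query, SCALE)))
    return min(db, key=lambda row: dist(row[0]))[1]

print(nearest_airfoil((0.65, 37.0, 0.75, 0.07, 0.48)))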


2020 ◽  
Vol 142 (4) ◽  
Author(s):  
Yasir M. Alfulayyih ◽  
Peiwen Li ◽  
Ammar Omar Gwesha

An algorithm and a model are developed for precise planning of year-round solar energy (SE) collection, storage, and redistribution to meet a specified electrical power demand while relying fully on solar energy. The model takes the past 10 years' data on the average and worst-case sky coverage (cloud fraction) of a location, at 6-min intervals (windows) throughout each day, to predict the solar and electrical energy harvest. The electrical energy obtained from solar energy during sunny periods must meet the instantaneous demand as well as the storage needs for nighttime and overcast days, so that no single day has a shortage of energy supply over the entire year and over yearly cycles. The analysis can then determine the best starting date of operation, the least solar collection area, and the least energy storage capacity for cost-effectiveness of the system. The algorithm provides a fundamental tool for designing a general renewable energy harvest and storage system for uninterrupted year-round power supply. As an example, the algorithm was applied to the authors' local city, Tucson, Arizona, USA, for a steady power supply of 1 MW.
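A minimal sketch of the sizing search this describes: for a candidate starting window, simulate a full year at fixed intervals and find the minimum storage capacity such that the state of charge never drops below its starting level, i.e., no window runs short. The input series are placeholders; the paper's model additionally handles worst-case sky coverage and collection-area sizing.

# Storage sizing and start-date search over per-window energy series.
def min_storage_kwh(harvest_kwh, demand_kwh):
    """Minimum storage so the running balance never drops below zero,
    assuming the store starts full on the chosen start date."""
    soc = 0.0          # state of charge relative to the start level
    worst = 0.0        # deepest deficit below the start level
    for h, d in zip(harvest_kwh, demand_kwh):   # one entry per window
        soc += h - d
        worst = min(worst, soc)
    return -worst      # capacity needed to cover the deepest deficit

def best_start_window(harvest_kwh, demand_kwh):
    """Try each window of the year as the start of operation and keep the
    one needing the least storage (brute force; coarsen the series first
    in practice, since 6-min resolution gives ~87,600 windows)."""
    n = len(harvest_kwh)
    return min(range(n), key=lambda s: min_storage_kwh(
        harvest_kwh[s:] + harvest_kwh[:s], demand_kwh[s:] + demand_kwh[:s]))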

