BP-Wrapper: A System Framework Making Any Replacement Algorithms (Almost) Lock Contention Free

Author(s):  
Xiaoning Ding ◽  
Song Jiang ◽  
Xiaodong Zhang
2017 ◽  
Vol 36 (6) ◽  
pp. 1-13 ◽  
Author(s):  
Tao Yang ◽  
Jian Chang ◽  
Ming C. Lin ◽  
Ralph R. Martin ◽  
Jian J. Zhang ◽  
...  

2020 ◽  
Vol 53 (4) ◽  
pp. 35-41
Author(s):  
Ze Yang Wang ◽  
Rômulo Meira-Góes ◽  
Stéphane Lafortune ◽  
Raymond H. Kwong

2021 ◽  
Vol 17 (2) ◽  
pp. 1-45
Author(s):  
Cheng Pan ◽  
Xiaolin Wang ◽  
Yingwei Luo ◽  
Zhenlin Wang

Due to the large data volumes and low latency requirements of modern web services, an in-memory key-value (KV) cache (e.g., Redis or Memcached) is often an inevitable choice. The in-memory cache holds hot data, reduces request latency, and alleviates the load on backend databases. Inheriting from traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used (LRU) or its approximations. However, the diversity of miss penalties distinguishes a KV cache from a hardware cache: inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also exhibit locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance an existing cache model, the Average Eviction Time (AET) model, so that it can model a KV cache. We then apply the model to Redis and propose pRedis (Penalty- and Locality-aware Memory Allocation in Redis), which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. We also explore the diurnal behavior of a KV store and exploit long-term reuse, replacing the original passive eviction mechanism with an automatic dump/load mechanism to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and offers more quantitatively predictable performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
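The core idea of combining locality and miss penalty can be illustrated with a toy eviction policy: rank each cached entry by an estimate of its reuse rate (hits per unit age, a crude locality signal) weighted by its miss penalty, and evict the entry with the lowest score. This is a minimal, hypothetical sketch in the spirit of penalty-aware schemes such as Hyperbolic Caching; it is not the actual pRedis or AET-based algorithm, and the class name and scoring formula are illustrative assumptions.

```python
import time


class PenaltyAwareCache:
    """Toy cache: evict the entry with the lowest
    (hits / age) * miss_penalty score. A simplified, hypothetical
    combination of locality and miss penalty -- not the pRedis algorithm."""

    def __init__(self, capacity):
        self.capacity = capacity
        # key -> (value, miss_penalty, insert_time, hit_count)
        self.data = {}

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None  # a miss would cost `penalty` to serve from the backend
        value, penalty, t0, hits = entry
        self.data[key] = (value, penalty, t0, hits + 1)
        return value

    def put(self, key, value, penalty, now=None):
        now = time.monotonic() if now is None else now
        if key not in self.data and len(self.data) >= self.capacity:
            self._evict(now)
        self.data[key] = (value, penalty, now, 1)

    def _evict(self, now):
        def score(item):
            _, (_, penalty, t0, hits) = item
            age = max(now - t0, 1e-9)       # avoid division by zero
            return (hits / age) * penalty    # locality estimate x miss penalty
        victim = min(self.data.items(), key=score)[0]
        del self.data[victim]
```

Note how the penalty term changes the outcome relative to pure recency or frequency: a rarely used entry with a very expensive miss (e.g., a key backed by a slow database query) can outlive a frequently used but cheap-to-refetch one.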


2004 ◽  
Vol 23 (2) ◽  
pp. 101-110
Author(s):  
Rexford H. Draman

Like a growing number of individuals, the author believes that business is facing the same bifurcation point physics faced before the development of quantum mechanics: trying to fit a Newtonian explanation onto a non-Newtonian problem. From that perspective, business needs a system-based model, not further research and development on its Newtonian practices and beliefs. The focus of this paper is the development of such a framework. The paper draws on science to identify the necessary requirements for a living system and converts them into a three-entity framework. Through the conversion of this living-system framework, the necessary requirements for a living business system are identified. With that, an assortment of currently available system-based business tools and techniques that fulfill most of the requirements of a living business system is introduced. An approach to implementing these tools and techniques, while remaining open to the incorporation of other systems-based practices, is also presented.


Author(s):  
Zhuming Bi ◽  
Guoping Wang ◽  
Joel Thompson ◽  
David Ruiz ◽  
John Rosswurm ◽  
...  

2010 ◽  
Vol 102 (1-3) ◽  
pp. 15-29 ◽  
Author(s):  
Craig Rasmussen ◽  
Peter A. Troch ◽  
Jon Chorover ◽  
Paul Brooks ◽  
Jon Pelletier ◽  
...  
