data contention
Recently Published Documents


TOTAL DOCUMENTS: 11 (five years: 2)

H-INDEX: 3 (five years: 0)

2021, Vol 50 (1), pp. 15-22
Author(s): Erfan Zamanian, Julian Shun, Carsten Binnig, Tim Kraska

Distributed transactions over high-overhead TCP/IP-based networks have conventionally been considered prohibitively expensive; indeed, the primary goal of existing partitioning schemes is to minimize the number of cross-partition transactions. With the new generation of fast RDMA-enabled networks, however, this assumption no longer holds. In this paper, we first make the case that the new bottleneck hindering truly scalable transaction processing in modern RDMA-enabled databases is data contention, and that optimizing for data contention leads to different partitioning layouts than optimizing for the number of distributed transactions. We then present Chiller, a new approach to data partitioning and transaction execution that aims to minimize data contention for both local and distributed transactions.
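The abstract does not give Chiller's actual formulation, but the contrast it draws can be sketched as two different objectives for scoring a partition layout. All names here are hypothetical, and the contention score is only a crude proxy: conflicts on a record are assumed to be costlier when the accessing transaction must reach that record remotely, since locks held across the network are held longer.

```python
from collections import Counter

def distributed_txn_count(txns, partition_of):
    """Classic objective: number of transactions touching more than
    one partition (what traditional partitioners minimize)."""
    return sum(
        len({partition_of[r] for r in txn}) > 1
        for txn in txns
    )

def contention_score(txns, partition_of, home_of, remote_penalty=10):
    """Hypothetical contention proxy: accumulate a lock-hold-time
    weight per record (remote accesses weighted higher), then sum a
    pairwise-conflict proxy over records. Hot, remotely accessed
    records dominate the score."""
    cost = Counter()
    for home, txn in zip(home_of, txns):
        for r in txn:
            cost[r] += remote_penalty if partition_of[r] != home else 1
    return sum(c * c for c in cost.values())
```

A layout that minimizes `distributed_txn_count` can still concentrate conflicting accesses on a few hot records; a contention-aware layout would instead try to keep the score from `contention_score` low, which is the distinction the paper argues for.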


Author(s): Sarvesh Pandey, Udai Shanker

The Equal Slack (EQS) heuristic is one of the most widely used priority assignment heuristics. However, it suffers severely from intensive data contention, deadlock, and cyclic restart. To overcome some of these problems, this chapter proposes a Most Dependent Transaction First (MDTF) priority heuristic that incorporates the size of the dependent-transaction set of each directly competing transaction (i.e., each transaction that has requested access to the conflicting data item) into its priority computation. By exploiting this dependency information, MDTF considerably reduces data contention among concurrently executing cohorts and thereby reduces wasted system resources. A currently executing cohort can thus better assess the level of data contention with no extra communication or time overhead, and priorities can be assigned to cohorts efficiently.
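The chapter's exact priority formula is not given in the abstract; a minimal sketch of the MDTF idea might resolve a conflict over a data item by granting access to the competing transaction with the largest dependent set, falling back to an EQS-style earlier-deadline rule on ties. The `Txn` type and `mdtf_winner` helper are hypothetical names for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Txn:
    name: str
    deadline: float

def mdtf_winner(competing, dependents):
    """Among transactions competing for the same data item, pick the
    one on which the most other transactions depend; break ties by
    earlier deadline. A sketch of the MDTF idea, not the chapter's
    exact heuristic."""
    return max(
        competing,
        key=lambda t: (len(dependents.get(t, ())), -t.deadline),
    )
```

Favoring the most-depended-on transaction lets it finish and release contended data sooner, which is the mechanism by which the heuristic claims to cut data contention.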


1993, Vol 02 (03), pp. 431-457
Author(s): Jau-Hwang Wang, Jaideep Srivastava, Wei Tek Tsai

Parallel rule-firing approaches have been proposed to improve the performance of production systems, yet few models exist to measure their performance. There are three approaches to evaluating system performance: analytical modeling, simulation, and system prototyping or implementation. Analytical modeling is cost-effective and provides a powerful tool for studying various aspects of system performance by varying system parameters. Since analytical modeling has been used very successfully to evaluate database system performance, we develop such a model for parallel production systems, in which each rule firing is modeled as a transaction. Both resource contention and data contention are modeled in detail, and the performance of locking, timestamp, and optimistic approaches is analyzed. We show that significant speedup can be gained from parallel rule execution. Our main contribution is the insight into parallel rule firing provided by parametric modeling.
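The paper's detailed model is not reproduced in the abstract; as a toy illustration of the analytical-modeling style it describes, one can write a closed-form data-contention estimate under strong simplifying assumptions (uniform random access, independent conflicts). This is a generic sketch in the spirit of classical locking analyses, not the paper's model.

```python
def conflict_probability(n_txns, locks_per_txn, n_items):
    """Toy estimate of the probability that one transaction conflicts
    with at least one of the other n_txns - 1, assuming each takes
    locks_per_txn locks uniformly at random over n_items data items.
    P(two fixed txns conflict) is approximated as k^2 / D."""
    p_pair = min(1.0, locks_per_txn ** 2 / n_items)
    return 1 - (1 - p_pair) ** (n_txns - 1)

def effective_parallelism(n_txns, locks_per_txn, n_items):
    """Useful parallelism after discounting transactions expected to
    block or restart due to data contention -- a crude stand-in for a
    full resource-plus-data-contention model."""
    return n_txns * (1 - conflict_probability(n_txns, locks_per_txn, n_items))
```

Varying `n_items` (database size) or `locks_per_txn` (rule footprint) in such a formula is what the abstract means by studying performance "by varying system parameters": speedup grows as the accessed data spreads over more items.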

