Efficient sparse matrix–vector multiplication using cache oblivious extension quadtree storage format

2016, Vol 54, pp. 490-500
Author(s): Jilin Zhang, Jian Wan, Fangfang Li, Jie Mao, Li Zhuang, ...
2019, Vol 76 (3), pp. 2063-2081
Author(s): Yishui Li, Peizhen Xie, Xinhai Chen, Jie Liu, Bo Yang, ...

2016, Vol 26 (04), pp. 1640001
Author(s): Jiaquan Gao, Yuanshen Zhou, Kesong Wu

Accelerating sparse matrix-vector multiplication (SpMV) on graphics processing units (GPUs) has attracted considerable attention recently. We observe that on a specific multi-GPU platform, SpMV performance can usually be greatly improved when a matrix is partitioned into several blocks according to a predetermined rule and each block is assigned to a GPU with an appropriate storage format. This motivates us to propose a novel multi-GPU parallel SpMV optimization model. Our model involves two stages. In the first stage, a simple rule is defined to divide any given matrix among multiple GPUs, and a performance model, which is independent of the problems and dependent on the resources of the devices, is proposed to accurately predict the execution time of SpMV kernels. Using these models, in the second stage we construct an optimal multi-GPU parallel SpMV algorithm that is automatically and rapidly generated for the platform for any problem. Because our SpMV model is general, problem-independent, and device-dependent, it is constructed only once for each type of GPU. The experiments validate the high efficiency of our proposed model.
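The two stages described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the equal-rows partition rule, the choice of CSR versus ELL as candidate formats, and the per-element cost constants in the predicted-time model are all illustrative assumptions; a real model would be calibrated once per GPU type, as the abstract notes.

```python
# Hedged sketch of the two-stage idea from the abstract.
# Stage 1: a simple (assumed) rule splits the matrix's rows into
#          near-equal contiguous blocks, one per GPU.
# Stage 2: a toy (assumed) cost model predicts kernel time per block
#          and picks a storage format accordingly.

def partition_rows(n_rows, n_gpus):
    """Stage 1 rule (assumed): near-equal contiguous row blocks."""
    base, extra = divmod(n_rows, n_gpus)
    blocks, start = [], 0
    for g in range(n_gpus):
        size = base + (1 if g < extra else 0)
        blocks.append((start, start + size))
        start += size
    return blocks

def choose_format(row_nnz):
    """Stage 2 (assumed): pick CSR or ELL for a block by predicted time.

    row_nnz: list of nonzeros per row in the block.
    The cost constants are placeholders, not measured device parameters.
    """
    rows = len(row_nnz)
    nnz = sum(row_nnz)
    width = max(row_nnz) if row_nnz else 0
    t_csr = 1.0e-9 * nnz + 2.0e-9 * rows   # per-nonzero work + row-pointer overhead
    t_ell = 1.0e-9 * rows * width          # every row padded to the widest row
    return "CSR" if t_csr <= t_ell else "ELL"
```

Under this toy model, blocks with uniform row lengths favor ELL (no padding waste), while blocks with a few very long rows favor CSR, which is the kind of per-block, device-aware format choice the abstract attributes to its performance model.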

