Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

2004 ◽ Vol 12 (2) ◽ pp. 91-100 ◽ Author(s): H.X. Lin

Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallelizing compilers. Some vector/matrix computations, such as matrix-vector products, have simple data dependence structures (data parallelism) and can be parallelized easily. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the parallelism already present in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms specifically designed for parallel computation have to be devised. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example, and its application to general matrix computations is also considered. Its power in designing parallel algorithms beyond the ability of data dependence analysis is demonstrated by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).
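The abstract contrasts the inherently sequential Gaussian elimination of a tri-diagonal system with algorithms designed for parallelism. The details of ACER are not given here, so as background the sketch below shows the classical parallel cyclic reduction (PCR) technique for tri-diagonal systems, a well-known member of the same family of cyclic elimination/reduction methods: every equation update within a step is independent of the others, so each step maps directly onto parallel hardware. Function and variable names are illustrative, not from the paper.

```python
def solve_tridiagonal_pcr(a, b, c, d):
    """Solve a tri-diagonal system by parallel cyclic reduction (PCR).

    a: sub-diagonal (a[0] == 0), b: main diagonal,
    c: super-diagonal (c[-1] == 0), d: right-hand side.

    Each step combines equation i with equations i-s and i+s,
    eliminating the coupling at stride s; after ceil(log2(n)) steps
    every equation involves a single unknown.  Unlike forward/backward
    elimination, all n updates within one step are independent.

    NOTE: illustrative sketch of classical PCR, not the paper's ACER.
    """
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    s = 1
    while s < n:
        na, nb, nc, nd = a[:], b[:], c[:], d[:]
        for i in range(n):  # fully parallel across i on real hardware
            alpha = -a[i] / b[i - s] if i - s >= 0 else 0.0
            gamma = -c[i] / b[i + s] if i + s < n else 0.0
            na[i] = alpha * a[i - s] if i - s >= 0 else 0.0
            nc[i] = gamma * c[i + s] if i + s < n else 0.0
            nb[i] = (b[i]
                     + (alpha * c[i - s] if i - s >= 0 else 0.0)
                     + (gamma * a[i + s] if i + s < n else 0.0))
            nd[i] = (d[i]
                     + (alpha * d[i - s] if i - s >= 0 else 0.0)
                     + (gamma * d[i + s] if i + s < n else 0.0))
        a, b, c, d = na, nb, nc, nd
        s *= 2
    # Each equation is now decoupled: b[i] * x[i] = d[i].
    return [d[i] / b[i] for i in range(n)]
```

This restructuring trades extra arithmetic (O(n log n) total work versus O(n) for sequential elimination) for O(log n) parallel steps, which is exactly the kind of algorithmic redesign, rather than dependence analysis of the sequential algorithm, that the abstract argues for.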

