Automatic Data Distribution for Composite Grid Applications

1997 ◽  
Vol 6 (1) ◽  
pp. 95-113 ◽  
Author(s):  
Lorie M. Liebrock ◽  
Ken Kennedy

Problem topology, the connectivity of the problem, is the key to efficient parallelization support for partially regular applications. Specifically, it provides the information necessary for automatic data distribution and regular-application optimization of a large class of partially regular applications. This research focuses on composite grid applications and strives to take advantage of their partial regularity in the parallelization and compilation process. Composite grid problems arise in important application areas, e.g., reactor and aerodynamic simulation. The underlying physical phenomena are inherently parallel and their simulations are computationally intensive. We present algorithms that automatically determine data distributions for composite grid problems. The alignment and distribution specifications our algorithms produce may be used as input to a High Performance Fortran program to apply the mapping for execution of the simulation code. These algorithms eliminate the need for user-specified data distribution for this large class of complex-topology problems. We test the algorithms on a number of topological descriptions from aerodynamic and water-cooled nuclear reactor simulations. Speedup-bound predictions with and without communication, based on the automatically generated distributions, indicate that significant speedups are possible using these algorithms.
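As an illustration of the kind of output such algorithms produce, the sketch below emits HPF alignment and distribution directives for the component grids of a composite-grid problem. The grid names, the processor-array name `procs`, and the heuristic (block-distribute the largest grid, align the others to it) are illustrative assumptions, not the paper's actual algorithm.

```python
def emit_hpf_directives(grids, nprocs):
    """Sketch: generate HPF directives for a set of component grids.

    grids  -- dict mapping a grid name to its (rows, cols) extent
              (names here are hypothetical, not from the paper)
    nprocs -- number of processors in the (assumed 1-D) processor array
    Returns a list of HPF directive lines.
    """
    # Pick the grid with the most cells as the distribution anchor.
    anchor = max(grids, key=lambda g: grids[g][0] * grids[g][1])
    lines = [f"!HPF$ PROCESSORS procs({nprocs})"]
    # Block-distribute the anchor's first dimension over the processors.
    lines.append(f"!HPF$ DISTRIBUTE {anchor}(BLOCK, *) ONTO procs")
    # Align every other grid element-wise with the anchor so that
    # coupled cells land on the same processor.
    for name in grids:
        if name != anchor:
            lines.append(f"!HPF$ ALIGN {name}(i, j) WITH {anchor}(i, j)")
    return lines


# Example: two coupled component grids of a reactor-style simulation.
directives = emit_hpf_directives({"core": (200, 100), "plenum": (100, 100)}, 4)
```

In a real tool the alignment would follow the interface structure of the composite grid rather than a largest-grid heuristic; the point here is only the shape of the generated ALIGN/DISTRIBUTE specifications.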

1997 ◽  
Vol 6 (1) ◽  
pp. 73-94 ◽  
Author(s):  
Eduard Ayguadé ◽  
Jordi Garcia ◽  
Mercè Gironès ◽  
M. Luz Grande ◽  
Jesús Labarta

This article describes the main features and implementation of our automatic data distribution research tool. The tool (DDT) accepts programs written in Fortran 77 and generates High Performance Fortran (HPF) directives to map arrays onto the memories of the processors and to parallelize loops, along with executable statements to remap these arrays. DDT works by identifying a set of computational phases (procedures and loops). The algorithm builds a search space of candidate solutions for these phases and explores it for the combination that minimizes the overall cost, which includes both data-movement cost and computation cost. The movement cost reflects the cost of accessing remote data during the execution of a phase and the remapping cost that has to be paid in order to execute the phase with the selected mapping. The computation cost includes the cost of executing a phase in parallel according to the selected mapping and the owner-computes rule. The tool supports interprocedural analysis and uses control-flow information to identify how phases are sequenced during the execution of the application.
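The phase-by-phase search described above can be sketched as a dynamic program: for each phase, pick the candidate mapping that minimizes the cost accumulated so far plus the remapping cost from the previous phase's mapping plus the phase's own computation cost. This is a minimal sketch of that search structure under assumed cost functions, not DDT's actual implementation.

```python
def best_mappings(phases, candidates, comp_cost, remap_cost):
    """Choose one mapping per phase minimizing total cost.

    phases     -- ordered list of phase identifiers
    candidates -- dict: phase -> list of candidate mappings
    comp_cost  -- comp_cost(phase, m): cost of running phase with mapping m
    remap_cost -- remap_cost(m_prev, m): cost of remapping between phases
    Returns (mapping sequence, total cost).
    """
    # Best total cost of any sequence ending in each first-phase candidate.
    best = {m: comp_cost(phases[0], m) for m in candidates[phases[0]]}
    back = []  # per-phase backpointers for path reconstruction
    for p in phases[1:]:
        new_best, choice = {}, {}
        for m in candidates[p]:
            # Cheapest predecessor, counting the remapping cost to reach m.
            prev = min(best, key=lambda pm: best[pm] + remap_cost(pm, m))
            new_best[m] = best[prev] + remap_cost(prev, m) + comp_cost(p, m)
            choice[m] = prev
        back.append(choice)
        best = new_best
    # Reconstruct the minimizing sequence of mappings.
    m = min(best, key=best.get)
    total = best[m]
    seq = [m]
    for choice in reversed(back):
        m = choice[m]
        seq.append(m)
    return seq[::-1], total
```

For example, with two phases A and B, candidate mappings "row" and "col", computation costs that favor "row" for A and "col" for B, and a flat remapping penalty, the search trades the remapping cost against the per-phase savings and remaps between the phases when that is cheaper overall.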


2014 ◽  
Vol 7 (4) ◽  
pp. 37-46 ◽  
Author(s):  
Xiaoyan Wang ◽  
Xu Fan ◽  
Jinchuan Chen ◽  
Xiaoyong Du
