Efficient Implementation of High-Level Parallel Programs

1991 ◽  
Vol 19 (2) ◽  
pp. 142-151
Author(s):  
Rajive Bagrodia ◽  
Sharad Mathur

2021 ◽  
Vol 24 (1) ◽  
pp. 157-183
Author(s):  
Nikita Andreevich Kataev

Automation of parallel programming is important at every stage of parallel program development. These stages include profiling of the original program; program transformation, which allows us to achieve higher performance after parallelization; and, finally, construction and optimization of the parallel program. It is also important to choose a suitable parallel programming model to express the parallelism available in a program. On the one hand, the parallel programming model should be capable of mapping the parallel program to a variety of existing hardware resources. On the other hand, it should simplify the development of assistant tools and allow the user to explore, in a semi-automatic way, the parallel programs those tools generate. The SAPFOR (System FOR Automated Parallelization) system combines various approaches to the automation of parallel programming and allows the user to guide the parallelization where necessary. SAPFOR produces parallel programs according to the high-level DVMH parallel programming model, which simplifies the development of efficient parallel programs for heterogeneous computing clusters. This paper focuses on the approach to semi-automatic parallel programming that SAPFOR implements. We discuss the architecture of the system and present the interactive subsystem used to guide SAPFOR through program parallelization. We used this subsystem to parallelize programs from the NAS Parallel Benchmarks in a semi-automatic way. Finally, we compare the performance of manually written parallel programs with that of the programs SAPFOR builds.


2003 ◽  
Vol 13 (03) ◽  
pp. 473-484 ◽  
Author(s):  
KONRAD HINSEN

One of the main obstacles to a more widespread use of parallel computing in computational science is the difficulty of implementing, testing, and maintaining parallel programs. The combination of a simple parallel computation model, BSP, and a high-level programming language, Python, simplifies these tasks significantly. It allows the rapid development facilities of Python to be applied to parallel programs, providing interactive development as well as interactive debugging of parallel programs.
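The BSP model mentioned here organises a computation into supersteps, each consisting of local computation, a communication phase, and a barrier synchronisation. As a rough illustration of the idea (a plain-Python sequential simulation for exposition only, not the author's BSP library), a parallel summation can be written as a logarithmic sequence of supersteps:

```python
# Sequential simulation of the BSP model: each superstep is a local
# computation phase, a communication phase, and a barrier. Illustrative
# sketch only; names and structure are our own, not from the paper.

def bsp_sum(values, p=4):
    """Sum `values` on p simulated BSP processes via a tree reduction."""
    # Block-distribute the data across the p local memories.
    chunk = (len(values) + p - 1) // p
    local = [sum(values[i * chunk:(i + 1) * chunk]) for i in range(p)]
    # Each superstep halves the number of active processes:
    # process i+step sends its partial sum to process i, then all sync.
    step = 1
    while step < p:
        inbox = {}                        # communication phase
        for i in range(0, p, 2 * step):
            if i + step < p:
                inbox[i] = local[i + step]
        # (barrier: all messages delivered before the next superstep)
        for i, msg in inbox.items():      # next local computation phase
            local[i] += msg
        step *= 2
    return local[0]
```

The cost of such a program is predictable in the BSP cost model: here, ⌈log₂ p⌉ supersteps, each sending one word per active process.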


1996 ◽  
Vol 98 (3) ◽  
pp. 365-397 ◽  
Author(s):  
Indranil Dasgupta ◽  
Andrea Ruben Levi ◽  
Vittorio Lubicz ◽  
Claudio Rebbi

2003 ◽  
Vol 13 (03) ◽  
pp. 389-400 ◽  
Author(s):  
YIFENG CHEN ◽  
J. W. SANDERS

This paper studies top-down program development techniques for Bulk-Synchronous Parallelism. In that context, a specification formalism, LOGS (for 'the Logic of Global Synchrony'), has been proposed for the specification and high-level development of BSP designs. This paper extends the use of LOGS to support the protection of local variables in BSP programs, thus completing the link between specifications and programs.


2005 ◽  
Vol 15 (3) ◽  
pp. 351-352
Author(s):  
P. W. TRINDER

Engineering high-performance parallel programs is hard: not only must a correct, efficient and inherently parallel algorithm be developed, but the computations must also be effectively and efficiently coordinated across multiple processors. It has long been recognised that ideas and approaches drawn from functional programming may be particularly applicable to parallel and distributed computing (e.g. Wegner 1971). There are several reasons for this suitability: concurrent stateless computations are much easier to coordinate, high-level coordination abstractions reduce programming effort, and declarative notations are amenable to reasoning, i.e. to optimising transformations, derivation and performance analysis.
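The first of these points, that stateless computations are easy to coordinate, can be illustrated with a small sketch (our own example in Python, not drawn from the works surveyed here): a pure function needs no locks and gives the same result under any scheduling, so a high-level coordination abstraction can map it across workers on the caller's behalf.

```python
# Stateless computations compose freely: with no shared mutable state,
# a pure function can be mapped across processes without locks, and the
# result is deterministic regardless of scheduling. Illustrative only.
from concurrent.futures import ProcessPoolExecutor

def collatz_steps(n):
    """Pure function: number of Collatz steps to reach 1 from n."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

def parallel_map(fn, xs, workers=4):
    # High-level coordination abstraction: the executor handles all
    # scheduling; the caller supplies only a pure function and data.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, xs))

if __name__ == "__main__":
    sequential = [collatz_steps(n) for n in range(1, 20)]
    parallel = parallel_map(collatz_steps, range(1, 20))
    assert parallel == sequential   # same answer under any schedule
```

Because `collatz_steps` touches no shared state, replacing the process pool with threads, or with a sequential map, changes only performance, never the result.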

