Broadcasting on large scale heterogeneous platforms under the bounded multi-port model

Author(s):  
Olivier Beaumont ◽  
Lionel Eyraud-Dubois ◽  
Shailesh Kumar Agrawal

Author(s):  
Yu-Cheng Chou ◽  
Harry H. Cheng

Message Passing Interface (MPI) is a standardized library specification designed for message-passing parallel programming on large-scale distributed systems. A number of MPI libraries have been implemented to allow users to develop portable programs using the scientific programming languages Fortran, C, and C++. Ch is an embeddable C/C++ interpreter that provides an interpretive environment for C/C++ based scripts and programs. Combining Ch with any MPI C/C++ library provides the functionality for rapid development of MPI C/C++ programs without compilation. In this article, the method of interfacing Ch scripts with MPI C implementations is introduced, using the MPICH2 C library as an example. The MPICH2-based Ch MPI package provides users with the ability to interpretively run MPI C programs based on the MPICH2 C library. Running MPI programs through the MPICH2-based Ch MPI package across heterogeneous platforms consisting of Linux and Windows machines is illustrated. Comparisons of bandwidth, latency, and parallel computation speedup between C MPI, Ch MPI, and MPI for Python in an Ethernet-based environment comprising identical Linux machines are presented. A Web-based example is given to demonstrate the use of Ch and MPICH2 in C-based CGI scripting to facilitate the development of Web-based applications for parallel computing.
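The bandwidth and latency figures compared above are typically obtained from ping-pong microbenchmarks between two processes. As a minimal sketch of that measurement idea (not the paper's benchmark code, and using Python's standard `multiprocessing` pipes as a stand-in for MPI point-to-point calls), one can time round trips for a fixed payload:

```python
import time
from multiprocessing import Process, Pipe

def pong(conn):
    # Echo every message back; an empty message signals shutdown.
    while True:
        msg = conn.recv()
        conn.send(msg)
        if not msg:
            break

def ping_pong(payload: bytes, rounds: int = 100) -> float:
    """Return the average round-trip time in seconds for one payload size."""
    parent, child = Pipe()
    p = Process(target=pong, args=(child,))
    p.start()
    start = time.perf_counter()
    for _ in range(rounds):
        parent.send(payload)
        parent.recv()
    elapsed = time.perf_counter() - start
    parent.send(b"")  # tell the echo process to stop
    parent.recv()
    p.join()
    return elapsed / rounds

if __name__ == "__main__":
    rtt = ping_pong(b"x" * 1024)
    # Half the round trip approximates one-way latency; for large payloads,
    # payload size divided by one-way time approximates bandwidth.
    print(f"avg RTT for 1 KiB payload: {rtt * 1e6:.1f} us")
```

An MPI version would replace the pipe operations with `MPI_Send`/`MPI_Recv` between ranks 0 and 1; sweeping the payload size separates the latency-dominated regime from the bandwidth-dominated one.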


2014 ◽  
Vol 25 (10) ◽  
pp. 2520-2528 ◽  
Author(s):  
Olivier Beaumont ◽  
Nicolas Bonichon ◽  
Lionel Eyraud-Dubois ◽  
Przemyslaw Uznanski ◽  
Shailesh Kumar Agrawal

2009 ◽  
Vol 19 (03) ◽  
pp. 383-397 ◽  
Author(s):  
Anne Benoit ◽  
Yves Robert ◽  
Eric Thierry

In this paper, we explore the problem of mapping linear chain applications onto large-scale heterogeneous platforms. A series of data sets enter the input stage and progress from stage to stage until the final result is computed. An important optimization criterion in such a framework is the latency, or makespan, which measures the response time of the system to process a single data set entirely. For such applications, which are representative of a broad class of real-life applications, we can consider one-to-one mappings, in which each stage is mapped onto a single processor. However, in order to reduce the communication cost, it seems natural to group consecutive stages into intervals. The interval mapping problem can be solved in a straightforward way if the platform has homogeneous communications: the whole chain is grouped into a single interval, which in turn is mapped onto the fastest processor. But the problem becomes harder on a fully heterogeneous platform. Indeed, we prove the NP-completeness of this problem. Furthermore, we prove that neither the interval mapping problem nor the similar one-to-one mapping problem can be approximated in polynomial time within any constant factor (unless P = NP).
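Since the problem is NP-complete on heterogeneous platforms, small instances can still be solved by exhaustive search. The following sketch (an illustration with an assumed linear cost model, not the paper's formulation: each interval's compute time is its total work divided by its processor's speed, each cut between intervals adds a fixed link cost, and each processor hosts at most one interval) enumerates all interval mappings of a short chain:

```python
from itertools import permutations

def best_interval_mapping(work, speeds, comm):
    """Exhaustively search all interval mappings of a linear chain.

    work[i]   : computation weight of stage i
    speeds[p] : speed of processor p
    comm[p][q]: communication cost on the link between processors p and q
    Returns (min_latency, intervals, assignment). Exponential time;
    only intended for small instances.
    """
    n, m = len(work), len(speeds)
    best = (float("inf"), None, None)

    def partitions(start):
        # Yield all ways to split stages [start..n) into consecutive intervals.
        if start == n:
            yield []
            return
        for end in range(start + 1, n + 1):
            for rest in partitions(end):
                yield [(start, end)] + rest

    for parts in partitions(0):
        # Try every assignment of distinct processors to the intervals.
        for procs in permutations(range(m), len(parts)):
            lat = 0.0
            for (s, e), p in zip(parts, procs):
                lat += sum(work[s:e]) / speeds[p]  # compute time of interval
            for p, q in zip(procs, procs[1:]):
                lat += comm[p][q]                  # cost of each cut
            if lat < best[0]:
                best = (lat, parts, procs)
    return best

if __name__ == "__main__":
    # Costly links discourage splitting: the optimum is a single interval
    # on the fastest processor, matching the homogeneous-communication case.
    work = [3.0, 1.0, 2.0]
    speeds = [1.0, 2.0]
    comm = [[0.0, 5.0], [5.0, 0.0]]
    lat, parts, procs = best_interval_mapping(work, speeds, comm)
    print(lat, parts, procs)
```

With expensive links, the search returns the whole chain as one interval on the fastest processor; lowering the link costs makes splitting the chain across processors worthwhile, which is exactly where the heterogeneous problem becomes hard.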


Author(s):  
Ilya Afanasyev ◽  
Alexander Daryin ◽  
Jack Dongarra ◽  
Dmitry Nikitenko ◽  
Alexey Teplov ◽  
...  
