Large-scale parallel programming: experience with the BBN Butterfly parallel processor

Author(s):  
Thomas J. LeBlanc ◽  
Michael L. Scott ◽  
Christopher M. Brown
1988 ◽  
Vol 23 (9) ◽  
pp. 161-172 ◽  

1987 ◽  
Vol 18 (6) ◽  
pp. 89-99 ◽  
Author(s):  
Hideki Asai ◽  
Mitsuo Asai ◽  
Mamoru Tanaka

2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Sol Ji Kang ◽  
Sang Yeon Lee ◽  
Keon Myung Lee

With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to handle such problems efficiently. This paper briefly reviews parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared-memory systems, MPI is the de facto industry standard for distributed-memory systems, and the MapReduce framework has become the de facto standard for large-scale data-intensive applications. The qualitative pros and cons of each framework are known, but quantitative performance indexes give a clearer picture of which framework to use for a given application. Two benchmark problems are chosen to compare the frameworks: the all-pairs-shortest-path problem and the data join problem. This paper presents parallel programs for both problems implemented on each of the three frameworks, reports experimental results on a cluster of computers, and discusses which framework is the right tool for each job by analyzing the characteristics and performance of the paradigms.


2013 ◽  
Vol 303-306 ◽  
pp. 2165-2169
Author(s):  
Zheng Meng ◽  
Ying Lin ◽  
Yan Kang ◽  
Qian Yu

With the development of computer technology, multi-core programming has become a hot issue. Based on the directed acyclic graph, this paper defines a number of executable operations and establishes a parallel programming pattern. Using vertices to represent tasks and edges to represent communication between them, this pattern lets programmers easily identify the available concurrency and expose it for use in algorithm design. The proposed pattern can be used for large-scale static batch processing of data in multi-core environments and brings considerable convenience when dealing with complex problems.


The efficiency of parallel processors, in the sense of fast computation, depends mainly on the scheduling of activities. The most important factor in activity scheduling is waiting time, which directly influences the overall computation time. Minimizing the variance of waiting time, known as Waiting Time Variance (WTV), is one of the Quality of Service (QoS) metrics that improve the efficiency of activity scheduling. The main focus of this paper is allocating activities from an activity pool and scheduling them on identical parallel processors for large-scale execution while minimizing WTV. In large-scale computing, activities are complex in nature, so prior knowledge of each activity is required before the schedule is prepared if computation is to be efficient and rapid. A snake-walk style of activity distribution among the parallel processors is presented in this paper for the minimization problem. The minimization of WTV is measured with three heuristic methods, named RSS, VS, and BS. The experimental results are compared with existing schemes and show that the new snake-style scheme outperforms them across a wide range of activities. The algorithm's findings are illustrated with graphs.


Author(s):  
Zhi Shang

Simulations of environmental flood problems usually face the scalability limits of large-scale parallel computing. A plain parallel technique based on pure MPI struggles to scale well because of the large number of domain partitions. Therefore, hybrid programming using MPI and OpenMP is introduced to address the scalability issue. This technique plays to the strengths of both: OpenMP is employed for its efficient fine-grain parallel computing, while MPI performs the coarse-grain parallel domain partitioning and the associated data communication. In tests, hybrid MPI/OpenMP parallel programming was used to renovate the finite element solvers in the BIEF library of Telemac. The hybrid programming was found to help Telemac deal with its scalability issue.

