AOmpLib: An Aspect Library for Large-Scale Multi-core Parallel Programming

Author(s):  
Bruno Medeiros ◽  
Joao L. Sobral


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Sol Ji Kang ◽  
Sang Yeon Lee ◽  
Keon Myung Lee

With problem size and complexity increasing, several parallel and distributed programming models and frameworks have been developed to handle such problems efficiently. This paper briefly reviews parallel computing models and describes three widely recognized parallel programming frameworks: OpenMP, MPI, and MapReduce. OpenMP is the de facto standard for parallel programming on shared-memory systems, MPI is the de facto industry standard for distributed-memory systems, and the MapReduce framework has become the de facto standard for large-scale data-intensive applications. The qualitative pros and cons of each framework are well known, but quantitative performance indexes give a clearer picture of which framework to use for a given application. Two benchmark problems are chosen to compare the frameworks: the all-pairs-shortest-path problem and the data join problem. This paper presents parallel programs for these problems implemented on the three frameworks, reports experimental results on a cluster of computers, and discusses which is the right tool for the job by analyzing the characteristics and performance of the paradigms.
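The all-pairs-shortest-path benchmark above is typically solved with Floyd–Warshall. A minimal sketch (in Python for brevity; the paper's implementations use the frameworks' own languages) showing where the loop-level parallelism lives:

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n matrix of edge weights
    (math.inf where no edge). In an OpenMP version of this benchmark,
    the i/j loops for each fixed k are the natural parallel region."""
    n = len(dist)
    for k in range(n):           # outer loop carries a dependence: stays serial
        dk = dist[k]
        for i in range(n):       # these two loops are independent for fixed k
            dik = dist[i][k]
            for j in range(n):
                nd = dik + dk[j]
                if nd < dist[i][j]:
                    dist[i][j] = nd
    return dist
```

In the MPI variant the rows of `dist` would be block-distributed, with row `k` broadcast each iteration; in MapReduce the relaxation step becomes repeated join-and-minimum rounds.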


2013 ◽  
Vol 303-306 ◽  
pp. 2165-2169
Author(s):  
Zheng Meng ◽  
Ying Lin ◽  
Yan Kang ◽  
Qian Yu

With the development of computer technology, multi-core programming has become a hot topic. Based on directed acyclic graphs, this paper defines a number of executable operations and establishes a parallel programming pattern. Using vertices to represent tasks and edges to represent communication between them, this pattern lets programmers easily identify the available concurrency and expose it for use in algorithm design. The proposed pattern can be used for large-scale static data batch processing in multi-core environments and brings considerable convenience when dealing with complex issues.
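The core idea can be sketched by grouping a task DAG into levels of mutually independent tasks (Python for illustration; the pattern itself is language-neutral, and the function and task names here are made up):

```python
from collections import defaultdict

def concurrency_levels(tasks, edges):
    """Group a task DAG into levels: every task in a level depends only on
    tasks in earlier levels, so tasks within one level may run concurrently.
    'edges' is a list of (u, v) pairs meaning u must finish before v."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    level = [t for t in tasks if indeg[t] == 0]   # tasks with no prerequisites
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for u in level:                            # "retire" this level
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        level = nxt
    return levels
```

Each level is then a candidate for a parallel-for over a thread pool; the edges between levels mark where communication or synchronization is required.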


Author(s):  
Zhi Shang

Simulations of environmental flooding problems usually face the scalability problem of large-scale parallel computing. A plain parallel technique based on pure MPI struggles to scale well because of the large number of domain partitions. Therefore, hybrid programming using MPI and OpenMP is introduced to deal with the scalability issue. This kind of parallel technique plays to the strengths of both MPI and OpenMP: OpenMP is employed for its efficient fine-grain parallel computing, while MPI performs the coarse-grain parallel domain partitioning and handles data communications. In the tests, hybrid MPI/OpenMP parallel programming was used to renovate the finite element solvers in the BIEF library of Telemac. It was found that the hybrid programming helps Telemac deal with the scalability issue.
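As an analogy only (assumption: the paper works in Telemac's own solver stack with real MPI ranks and OpenMP threads), the two-level decomposition can be sketched in Python, with pool workers standing in for coarse-grain ranks and the per-cell loop standing in for fine-grain work:

```python
from concurrent.futures import ThreadPoolExecutor

def smooth_subdomain(cells):
    # fine-grain work (the OpenMP level): each cell averaged with its
    # immediate neighbours inside one subdomain
    return [sum(cells[max(0, i - 1):i + 2]) / len(cells[max(0, i - 1):i + 2])
            for i in range(len(cells))]

def hybrid_smooth(domain, nparts):
    # coarse-grain level (the MPI level): partition the domain and hand
    # each subdomain to a worker; a real solver would also exchange halo
    # cells at partition boundaries via MPI messages
    size = (len(domain) + nparts - 1) // nparts
    parts = [domain[i:i + size] for i in range(0, len(domain), size)]
    with ThreadPoolExecutor(max_workers=nparts) as pool:
        out = list(pool.map(smooth_subdomain, parts))
    return [x for part in out for x in part]
```

The point of the hybrid split is that the number of coarse partitions (and hence MPI messages) stays small, while the fine-grain loops absorb the remaining cores.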


2013 ◽  
Vol 23 (04) ◽  
pp. 1340010 ◽  
Author(s):  
R. F. BARRETT ◽  
C. T. VAUGHAN ◽  
S. D. HAMMOND ◽  
D. ROWETH

For over two decades, the dominant means of enabling portable performance for computational science and engineering applications on parallel processing architectures has been the bulk-synchronous parallel (BSP) programming model. Code developers, motivated by performance considerations to minimize the number of messages transmitted, have typically pursued a strategy of aggregating message data into fewer, larger messages. Emerging and future high-performance architectures, especially those targeting Exascale capabilities, provide motivation and capabilities for revisiting this approach. In this paper we explore alternative configurations within the context of a large-scale, complex multi-physics application and a proxy that represents its behavior, presenting results that demonstrate some important advantages as the number of processors increases in scale.
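The aggregation strategy the paper revisits amounts to packing many small messages into fewer large buffers before sending. A minimal sketch in Python (the length-prefixed framing and the `max_bytes` threshold are illustrative assumptions, not the application's actual wire format):

```python
def aggregate_messages(msgs, max_bytes):
    """Pack small messages into fewer large send buffers, the classic
    BSP-era strategy. Returns a list of batches, each a single byte
    buffer of length-prefixed messages staying under max_bytes."""
    batches, current, size = [], [], 0
    for m in msgs:
        framed = len(m).to_bytes(4, "big") + m   # 4-byte length prefix
        if current and size + len(framed) > max_bytes:
            batches.append(b"".join(current))    # flush: one large send
            current, size = [], 0
        current.append(framed)
        size += len(framed)
    if current:
        batches.append(b"".join(current))
    return batches
```

The trade-off the paper probes is exactly this flush threshold: larger batches amortize per-message overhead, while smaller, more frequent messages can overlap better with computation on newer networks.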


1988 ◽  
Vol 23 (9) ◽  
pp. 161-172 ◽  
Author(s):  
Thomas J. LeBlanc ◽  
Michael L. Scott ◽  
Christopher M. Brown

2006 ◽  
Vol 14 (2) ◽  
pp. 129-156 ◽  
Author(s):  
Sin Man Cheang ◽  
Kwong Sak Leung ◽  
Kin Hong Lee

This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs that run on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than their sequential counterparts require. This creates a new approach: evolving a feasible problem solution in parallel-program form and then serializing it into a sequential program if required. The effectiveness and efficiency of GPP are investigated on a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
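The parallel-to-sequential step can be sketched as interleaving per-ALU instruction streams while preserving each stream's internal order (Python for illustration; this is a loose analogy only, since the paper's MAP semantics also involve shared registers and synchronized instruction issue):

```python
def serialize(streams):
    """Flatten per-ALU parallel instruction streams into one sequential
    program via round-robin interleaving, preserving each stream's
    internal order. Valid only when cross-stream dependences follow
    the lock-step issue order, as on the MAP's synchronized streams."""
    out, i = [], 0
    while any(i < len(s) for s in streams):
        for s in streams:          # visit every ALU's i-th instruction
            if i < len(s):
                out.append(s[i])
        i += 1
    return out
```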

