Scalable distributed memory embedded system with a low-cost hardware message passing interface

2009 ◽  
Vol 6 (12) ◽  
pp. 837-843 ◽  
Author(s):  
Ha-young Jeong ◽  
Won Hur ◽  
Yong-surk Lee
2005 ◽  
Vol 13 (2) ◽  
pp. 79-91 ◽  
Author(s):  
George A. Gravvanis ◽  
Konstantinos M. Giannoutakis

A new class of normalized explicit approximate inverse matrix techniques, based on normalized approximate factorization procedures, is introduced for solving sparse linear systems resulting from the finite difference discretization of partial differential equations in three space variables. A new parallel normalized explicit preconditioned conjugate gradient squared method, used in conjunction with normalized approximate inverse matrix techniques for efficiently solving sparse linear systems on distributed memory systems with the Message Passing Interface (MPI) communication library, is also presented, along with theoretical estimates of speedup and efficiency. The implementation and performance on a distributed memory MIMD machine using MPI are also investigated. Applications to characteristic initial/boundary value problems in three dimensions are discussed and numerical results are given.
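The preconditioned conjugate gradient squared (CGS) iteration at the heart of the method can be sketched as follows. This is a minimal serial Python illustration, not the authors' parallel implementation: the explicit approximate inverse is applied purely by matrix-vector products (which is what makes the scheme attractive for distributed memory machines), and a simple diagonal (Jacobi) inverse stands in for the paper's normalized approximate inverse.

```python
import numpy as np

def cgs(A, b, M_inv, x0=None, tol=1e-10, max_iter=200):
    """Preconditioned Conjugate Gradient Squared (Sonneveld's CGS).

    M_inv is an explicit approximate inverse of A, applied only via
    matrix-vector products (a stand-in here for the paper's
    normalized approximate inverse matrix).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    r_tilde = r.copy()                  # fixed shadow residual
    rho_prev = 1.0
    p = np.zeros(n)
    q = np.zeros(n)
    for i in range(max_iter):
        rho = r_tilde @ r
        if rho == 0.0:
            break                       # method breakdown
        if i == 0:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_prev
            u = r + beta * q
            p = u + beta * (q + beta * p)
        p_hat = M_inv @ p               # preconditioning = one product
        v = A @ p_hat
        alpha = rho / (r_tilde @ v)
        q = u - alpha * v
        u_hat = M_inv @ (u + q)
        x += alpha * u_hat
        r -= alpha * (A @ u_hat)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_prev = rho
    return x

# Usage sketch: 1-D Poisson finite-difference matrix (the paper treats
# three space variables; one dimension keeps the example small).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))      # Jacobi approximate inverse
x = cgs(A, b, M_inv)
```

Because the preconditioner is an explicit matrix rather than a triangular solve, every step is a matrix-vector product, which distributes naturally over MPI ranks by row blocks.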


Author(s):  
J. G. Michopoulos ◽  
G. V. Zaruba

In the present paper we describe the development of a portable computational cluster infrastructure based on Apple's Mac mini systems. The objective of this effort is to explore the feasibility of building and evaluating a low-cost computational cluster that is portable, owing to the small form factor of the individual units, while still providing considerable computational scalability. We outline our experiences in the form of a how-to methodology for setting up a Mac OS X 10.4 cluster for Xgrid and three popular implementations of the Message Passing Interface (MPI): OpenMPI, LAM-MPI, and MPICH2. Subsequently, we describe performance measurements obtained with the Fire Dynamics Simulator (FDS) for two cases of reactive flow topologies. Finally, throughput and latency results for Xgrid and OpenMPI are presented and discussed.
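Throughput and latency figures of the kind reported here are conventionally measured with a ping-pong microbenchmark: one node times round trips of a fixed-size message to a peer. The sketch below illustrates that measurement pattern in plain Python, with a local pipe between two processes standing in for the cluster interconnect; an MPI version would replace the pipe with matched send/receive calls between two ranks.

```python
import time
from multiprocessing import Pipe, Process

def echo(conn, n_iters):
    # The "pong" side: bounce every message straight back.
    for _ in range(n_iters):
        conn.send(conn.recv())

def ping_pong(size, n_iters=200):
    """Round-trip timing in the style of the classic MPI ping-pong
    benchmark; a local pipe stands in for the network link."""
    parent, child = Pipe()
    proc = Process(target=echo, args=(child, n_iters))
    proc.start()
    payload = bytes(size)               # message of `size` bytes
    t0 = time.perf_counter()
    for _ in range(n_iters):
        parent.send(payload)
        parent.recv()                   # wait for the echo
    elapsed = time.perf_counter() - t0
    proc.join()
    latency = elapsed / (2 * n_iters)   # mean one-way latency (s)
    bandwidth = size / latency          # effective bytes per second
    return latency, bandwidth
```

Sweeping `size` from a few bytes to megabytes separates the latency-bound regime (small messages) from the bandwidth-bound regime (large messages), which is how the Xgrid and OpenMPI results are typically compared.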


2020 ◽  
Vol 15 ◽  
Author(s):  
Weiwen Zhang ◽  
Long Wang ◽  
Theint Theint Aye ◽  
Juniarto Samsudin ◽  
Yongqing Zhu

Background: Genotype imputation as a service enables researchers to estimate genotypes from haplotyped data without performing whole-genome sequencing. However, genotype imputation is computation-intensive, so satisfying the high performance requirements of genome-wide association studies (GWAS) remains a challenge.
Objective: In this paper, we propose a high performance computing solution for genotype imputation on supercomputers to enhance its execution performance.
Method: We design and implement a multi-level parallelization comprising job-level, process-level, and thread-level parallelism, enabled by job scheduling management, the Message Passing Interface (MPI), and OpenMP, respectively. It involves job distribution, chunk partition and execution, parallelized iteration for imputation, and data concatenation. This multi-level design exploits the multi-machine/multi-core architecture to improve the performance of genotype imputation.
Results: Experimental results show that the proposed method outperforms a Hadoop-based implementation of genotype imputation. Moreover, experiments on supercomputers show that it significantly shortens execution time.
Conclusion: The proposed multi-level parallelization, deployed as an imputation service, will facilitate bioinformatics researchers in Singapore in conducting genotype imputation and enhancing association studies.
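The chunk-partition / parallel-execution / concatenation pipeline described in the Method can be sketched as follows. This is a single-machine Python illustration of the pattern only, not the authors' system: a thread pool stands in for the MPI/OpenMP levels, and `impute_chunk` is a hypothetical placeholder that fills missing genotypes with the most frequent observed value in the chunk.

```python
from concurrent.futures import ThreadPoolExecutor

def impute_chunk(chunk):
    """Hypothetical per-chunk imputation: fill each missing genotype
    (None) with the most frequent observed value in the chunk."""
    observed = [g for g in chunk if g is not None]
    fill = max(set(observed), key=observed.count) if observed else 0
    return [fill if g is None else g for g in chunk]

def partition(data, n_chunks):
    """Split the genotype vector into near-equal contiguous chunks."""
    k, m = divmod(len(data), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        end = start + k + (1 if i < m else 0)
        chunks.append(data[start:end])
        start = end
    return chunks

def run_imputation(data, n_workers=4):
    """Chunk partition -> parallel imputation -> data concatenation."""
    chunks = partition(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(impute_chunk, chunks))
    return [g for chunk in results for g in chunk]  # concatenate
```

In the multi-level scheme, the same three stages recur at each level: the scheduler distributes jobs across machines, MPI ranks take chunks within a job, and OpenMP threads parallelize the iteration inside each chunk before results are concatenated back.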

