parallel supercomputers
Recently Published Documents

TOTAL DOCUMENTS: 84 (five years: 0)
H-INDEX: 14 (five years: 0)
2019 · Vol 20 (S16) · Author(s): Satoshi Ito, Masaaki Yadome, Tatsuo Nishiki, Shigeru Ishiduki, Hikaru Inoue, ...

Abstract Background: Supercomputers have become indispensable infrastructure in science and industry. In particular, most state-of-the-art scientific results rely on massively parallel supercomputers ranked in the TOP500. Their use in bioinformatics, however, remains limited, fundamentally because these machines do not provide the asynchronous parallel job-processing service of Grid Engine. To encourage the use of massively parallel supercomputers in bioinformatics, we developed middleware called Virtual Grid Engine (VGE), which enables software pipelines to run their tasks automatically as MPI programs. Results: We conducted basic tests measuring the time VGE needs to assign jobs to workers. The overhead of the employed algorithm was 246 microseconds, and the software managed thousands of jobs smoothly on the K computer. We also ran a practical bioinformatics test comprising two tasks: splitting input FASTQ data and aligning it with BWA. Using 25,055 nodes (200,440 cores), the calculation completed in three hours. Conclusion: We identified four key requirements for this kind of software: a non-privileged server program, handling of multiple jobs, dependency control, and usability. We carefully designed for and verified all of these requirements; the software fulfilled every one and achieved good performance in a large-scale analysis.
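The abstract's "multiple job handling" and "dependency control" requirements describe a master-worker scheduler: a job is dispatched only once everything it depends on has finished. The sketch below is hypothetical (not the VGE code; class and job names are invented for illustration) and simulates that dispatch logic in pure Python; in an MPI run, each released job would be sent to an idle worker rank instead of appended to a list.

```python
# Hypothetical sketch of dependency-controlled job dispatch, mirroring
# the master-worker pattern the abstract describes. Not the VGE source.
from collections import deque

class JobScheduler:
    """Releases a job only when all of its prerequisites have finished."""
    def __init__(self):
        self.deps = {}        # job -> set of unfinished prerequisites
        self.dependents = {}  # job -> jobs waiting on it
        self.ready = deque()  # jobs eligible for dispatch

    def add_job(self, job, depends_on=()):
        pending = set(depends_on)
        self.deps[job] = pending
        for d in depends_on:
            self.dependents.setdefault(d, []).append(job)
        if not pending:
            self.ready.append(job)

    def next_job(self):
        return self.ready.popleft() if self.ready else None

    def complete(self, job):
        # Mark `job` done and release any dependents that became eligible.
        for w in self.dependents.get(job, []):
            self.deps[w].discard(job)
            if not self.deps[w]:
                self.ready.append(w)

# Simulate the split -> BWA-alignment pipeline from the abstract:
sched = JobScheduler()
sched.add_job("split")
for i in range(4):
    sched.add_job(f"bwa_{i}", depends_on=["split"])

order = []
while (job := sched.next_job()) is not None:
    order.append(job)   # in a real MPI run: send to an idle worker rank
    sched.complete(job)
```

The split job runs first, and the alignment jobs become eligible only after it completes, exactly the ordering the two-task test requires.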


2018 · Author(s): Vladimir V. Kalmykov, Rashit A. Ibrayev, Maxim N. Kaurkin, Konstantin V. Ushakov

Abstract. We present a new version of the Compact Modeling Framework (CMF3.0), developed to provide the software environment for stand-alone and coupled models of global geophysical fluids. CMF3.0 is designed for implementing high- and ultra-high-resolution models on massively parallel supercomputers. The key features of the previous version (CMF2.0) are summarized to reflect the progress of our research. In CMF3.0, the pure-MPI approach with a high-level abstract driver, optimized coupler interpolation, and I/O algorithms is replaced by a PGAS-based communication scheme, while the central-hub architecture evolves into a set of simultaneously working services. Performance tests for both versions are reported. In addition, a parallel implementation of the EnOI (Ensemble Optimal Interpolation) data assimilation method, provided as a program service of CMF3.0, is presented.
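For readers unfamiliar with EnOI: unlike the ensemble Kalman filter, it builds the background covariance from a *static* ensemble, so a single analysis step is just x_a = x_f + K(y − Hx_f) with a fixed gain. The following NumPy sketch is a textbook formulation under that assumption, not the CMF3.0 service; all variable names are illustrative.

```python
import numpy as np

def enoi_analysis(x_f, ens, H, y, R, alpha=1.0):
    """One EnOI analysis step: x_a = x_f + K (y - H x_f), with the gain K
    built from a static ensemble (columns of `ens`) scaled by `alpha`.
    A textbook sketch, not the CMF3.0 implementation."""
    m = ens.shape[1]
    # Scaled anomalies so that alpha * A @ A.T approximates P_f
    A = (ens - ens.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)
    PHt = alpha * A @ (H @ A).T      # P_f H^T
    S = H @ PHt + R                  # innovation covariance H P_f H^T + R
    return x_f + PHt @ np.linalg.solve(S, y - H @ x_f)

# Tiny example: a 3-variable state, one observation of the first variable.
rng = np.random.default_rng(0)
x_f = np.zeros(3)                    # forecast state
ens = rng.normal(size=(3, 5))        # static 5-member ensemble
H = np.array([[1.0, 0.0, 0.0]])      # observation operator
y = np.array([1.0])                  # observed value
R = np.array([[0.1]])                # observation-error covariance
x_a = enoi_analysis(x_f, ens, H, y, R)
```

Because the ensemble is static, the gain can be precomputed and the analysis reduces to matrix-vector products, which is what makes a parallel service-style implementation attractive.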


2017 · Vol 92 (6) · pp. 063001 · Author(s): Noritaka Shimizu, Takashi Abe, Michio Honma, Takaharu Otsuka, Tomoaki Togashi, ...
