Multiscale modelling, simulation and computing: from the desktop to the exascale

Author(s):
Alfons G. Hoekstra
Simon Portegies Zwart
Peter V. Coveney

This short contribution introduces a theme issue dedicated to ‘Multiscale modelling, simulation and computing: from the desktop to the exascale’. It presents a collection of articles on cutting-edge research in generic multiscale modelling and multiscale computing, and on applications thereof on high-performance computing systems. The issue opens with a position paper discussing the paradigm of multiscale computing in the face of the emerging exascale, followed by a review and critical assessment of existing multiscale computing environments. This theme issue thus provides a state-of-the-art account of generic multiscale computing, as well as exciting examples of applications of these concepts in domains ranging from astrophysics, via materials science and fusion, to the biomedical sciences. This article is part of the theme issue ‘Multiscale modelling, simulation and computing: from the desktop to the exascale’.

Author(s):
Alfons G. Hoekstra
Bastien Chopard
David Coster
Simon Portegies Zwart
Peter V. Coveney

In this position paper, we discuss two related topics: (i) generic multiscale computing on emerging exascale high-performance computing environments, and (ii) the scaling of such applications towards the exascale. We introduce the different phases of developing a multiscale model and simulating it on available computing infrastructure, and argue that generic methods can be relied upon both at the conceptual modelling level and when actually executing the multiscale simulation, and that generic frameworks and software tools to facilitate multiscale computing should be developed further. Next, we focus on simulating multiscale models on high-end computing resources in the face of emerging exascale performance levels. We argue that although applications could reach exascale performance by relying on weak scaling, and perhaps even on strong scaling, there are also clear arguments that such scaling may no longer apply for many applications on these emerging exascale machines, and that we need to resort to what we would call multi-scaling. This article is part of the theme issue ‘Multiscale modelling, simulation and computing: from the desktop to the exascale’.
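The strong- and weak-scaling arguments referred to in this abstract follow the classical laws of Amdahl and Gustafson. A minimal sketch, with purely illustrative numbers, of why strong scaling saturates at exascale processor counts while weak scaling does not:

```python
def amdahl_speedup(p, n):
    """Strong scaling (Amdahl's law): fixed problem size.
    p = parallel fraction of the code, n = number of processors."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Weak scaling (Gustafson's law): problem size grows with n."""
    return (1.0 - p) + p * n

# Even a 99%-parallel code is bounded near 1/(1-p) = 100 under strong
# scaling, no matter how many processors are added, whereas under weak
# scaling the speedup keeps growing roughly linearly with n.
print(amdahl_speedup(0.99, 10**6))     # ~ 100
print(gustafson_speedup(0.99, 10**6))  # ~ 990000
```

This is why simply adding processors to a fixed-size problem cannot, by itself, carry many applications to exascale performance.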


2019
Author(s):
I.A. Sidorov
T.V. Sidorova
Ya.V. Kurzibova

High-performance computing systems comprise a large number of hardware and software components that can fail. The well-known approaches to monitoring and ensuring the fault tolerance of high-performance computing systems do not currently provide a fully integrated solution. The aim of this paper is to develop methods and tools for identifying abnormal situations during large-scale computational experiments in high-performance computing environments, localizing these malfunctions, automatically troubleshooting them where possible, and automatically reconfiguring the computing environment otherwise. The proposed approach is based on the idea of integrating the monitoring systems used in different nodes of the environment into a unified meta-monitoring system. This approach minimizes the time needed for diagnostics and troubleshooting through the use of parallel operations. It also improves the resilience of processes in the computing environment through preventive measures for diagnosing and troubleshooting failures. These advantages increase the reliability and efficiency of the environment. The novelty of the proposed approach lies in the following elements: mechanisms for decentralized collection, storage, and processing of monitoring data; a new decision-making technique for reconfiguring the environment; and support for fault tolerance and reliability not only of software and hardware, but also of environment management systems.
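The meta-monitoring idea described above can be sketched as follows: per-node monitors report status records, and a meta-monitor aggregates them to detect failed or silent nodes and to decide which healthy subset to keep running on. All names, statuses, and thresholds below are illustrative assumptions, not the authors' implementation.

```python
import time

class MetaMonitor:
    """Toy aggregator over per-node monitoring reports."""

    def __init__(self, heartbeat_timeout=30.0):
        self.timeout = heartbeat_timeout
        self.reports = {}  # node id -> (timestamp, status string)

    def report(self, node, status, now=None):
        """Record a status report from a node-level monitor."""
        self.reports[node] = (now if now is not None else time.time(), status)

    def failed_nodes(self, now=None):
        """Nodes that reported an error or went silent past the timeout."""
        now = now if now is not None else time.time()
        return sorted(
            node for node, (ts, status) in self.reports.items()
            if status != "ok" or now - ts > self.timeout
        )

    def reconfigure(self, all_nodes, now=None):
        """Return the healthy subset of nodes to continue the experiment on."""
        bad = set(self.failed_nodes(now))
        return [n for n in all_nodes if n not in bad]

m = MetaMonitor(heartbeat_timeout=30.0)
m.report("node1", "ok", now=100.0)
m.report("node2", "disk_error", now=100.0)  # explicit failure
m.report("node3", "ok", now=50.0)           # stale heartbeat
print(m.failed_nodes(now=105.0))            # ['node2', 'node3']
print(m.reconfigure(["node1", "node2", "node3"], now=105.0))  # ['node1']
```

A real meta-monitoring system would of course distribute this aggregation itself, as the paper's decentralized collection and storage mechanisms suggest; the sketch only shows the failure-detection and reconfiguration decision in one place.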


Author(s):
Nikolay Kondratyuk
Vsevolod Nikolskiy
Daniil Pavlov
Vladimir Stegailov

Classical molecular dynamics (MD) calculations represent a significant part of the utilization time of high-performance computing systems. As a rule, the efficiency of such calculations depends on an interplay of software and hardware, which is nowadays moving towards hybrid GPU-based technologies. Several well-developed open-source MD codes focused on GPUs differ both in their data management capabilities and in performance. In this work, we analyze the performance of the LAMMPS, GROMACS and OpenMM MD packages with different GPU backends on Nvidia Volta and AMD Vega20 GPUs. We consider the efficiency of solving two identical MD models (generic for materials science and biomolecular studies, respectively) using different software and hardware combinations. We describe our experience in porting the CUDA backend of LAMMPS to ROCm HIP, which shows considerable benefits for AMD GPUs compared with the OpenCL backend.
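Porting CUDA code to HIP is, to a large extent, a textual translation: AMD's hipify tools rewrite CUDA API identifiers to their HIP equivalents, since the two runtime APIs are closely mirrored. A toy translator illustrating the idea on a code fragment; the mapping table is a small illustrative subset, not the full hipify rule set, and says nothing about the LAMMPS-specific work the abstract describes.

```python
# Illustrative subset of the CUDA -> HIP identifier mapping.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Naive textual CUDA-to-HIP translation of a source fragment."""
    # Replace longer identifiers first so that a shorter name that is a
    # prefix of a longer one (cudaMemcpy vs cudaMemcpyHostToDevice)
    # does not clobber it.
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return source

cuda_src = "#include <cuda_runtime.h>\ncudaMalloc(&d_x, n); cudaDeviceSynchronize();"
print(hipify(cuda_src))
# #include <hip/hip_runtime.h>
# hipMalloc(&d_x, n); hipDeviceSynchronize();
```

In practice, kernel launch syntax, vendor-specific intrinsics, and performance tuning (as the authors found for LAMMPS) require work beyond such mechanical renaming.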

