Using the World Wide Web to provide a platform independent interface to high performance computing

Author(s):  
D.W. Robertson ◽  
W.E. Johnston


2021 ◽  
Vol 251 ◽  
pp. 02039
Author(s):  
Michael Böhler ◽  
René Caspart ◽  
Max Fischer ◽  
Oliver Freyermuth ◽  
Manuel Giffels ◽  
...  

The inclusion of opportunistic resources, for example from High Performance Computing (HPC) centers or cloud providers, is an important contribution to bridging the gap between existing resources and the future needs of the LHC collaborations, especially in the HL-LHC era. However, the integration of these resources poses new challenges and often needs to happen in a highly dynamic manner. To enable an effective and lightweight integration of these resources, the tools COBalD and TARDIS are being developed at KIT. In this contribution we report on the infrastructure we use to dynamically offer opportunistic resources to collaborations in the Worldwide LHC Computing Grid (WLCG). The core components are COBalD/TARDIS, HTCondor, CVMFS and modern virtualization technology. The challenging task of managing the opportunistic resources is performed by COBalD/TARDIS. We showcase the challenges, employed solutions and experiences gained with the provisioning of opportunistic resources from several resource providers such as university clusters, HPC centers and cloud setups in a multi-VO environment. This work can serve as a blueprint for approaching the provisioning of resources from other providers.
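The demand-driven scaling idea behind COBalD/TARDIS can be sketched as a simple control loop: compare the utilisation of the currently provisioned pool with a target and decide whether to grow or drain. The sketch below is purely illustrative — the function name, parameters and thresholds are hypothetical and do not reflect the actual COBalD/TARDIS API.

```python
# Hypothetical sketch of demand-driven pool scaling, in the spirit of
# COBalD/TARDIS: a controller compares observed utilisation of a pool of
# opportunistically provisioned "drones" against a target and decides
# whether to add capacity, drain it, or hold steady.

def scale_decision(demand: int, supply: int, utilisation: float,
                   target: float = 0.9) -> int:
    """Return how many drones to add (positive) or drain (negative)."""
    if supply == 0:
        return min(demand, 1)      # bootstrap the pool with a single drone
    if utilisation > target and demand > supply:
        return 1                   # busy and undersupplied: grow cautiously
    if utilisation < target / 2:
        return -1                  # mostly idle: release opportunistic capacity
    return 0                       # within the comfortable operating band
```

Growing by one drone at a time while draining idle capacity keeps the footprint on the opportunistic provider lightweight, which matches the "highly dynamic" integration the abstract describes.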


2020 ◽  
Vol 71 (3) ◽  
pp. 263-267
Author(s):  
M. Serik ◽  
G. Zh. Yerlanova

At present, alongside the dynamic development of computer technology worldwide, the most effective ways of solving problems of practical importance are being sought, and high performance computing takes the lead in this. The development of modern society is therefore closely related to the training of experienced, modern specialists in the field of information technology, which in turn depends on the inclusion of new courses in the curriculum and full coverage of these issues in the content of the taught courses. This article analyzes the courses on high performance computing taught at experimental bases and abroad; on this basis, the topics of a special course and the content recommended for implementation in the educational process are determined. During the training, the competencies of students in high performance computing were identified.


Author(s):  
Peter V Coveney

We introduce a definition of Grid computing which is adhered to throughout this Theme Issue. We compare the evolution of the World Wide Web with current aspirations for Grid computing and indicate areas that need further research and development before a generally usable Grid infrastructure becomes available. We discuss work that has been done in order to make scientific Grid computing a viable proposition, including the building of Grids, middleware developments, computational steering and visualization. We review science that has been enabled by contemporary computational Grids, and associated progress made through the widening availability of high performance computing.


Author(s):  
Geetha J. ◽  
Uday Bhaskar N ◽  
Chenna Reddy P.

Data-intensive systems aim to efficiently process “big” data. Several data processing engines have evolved over the past decade, modeled around the MapReduce paradigm. This article explores Hadoop's MapReduce engine and proposes techniques to obtain a higher level of optimization by borrowing concepts from the world of High Performance Computing; consequently, the power consumed and heat generated are lowered. The article designs a system with a pipelined dataflow, in contrast to the existing unregulated “bursty” flow of network traffic, with the ability to carry out Map and Reduce tasks in parallel, and incorporating modern high-performance computing concepts using Remote Direct Memory Access (RDMA). To establish the claim of an increased performance measure of the proposed system, the authors provide an algorithm for RoCE-enabled MapReduce and a mathematical derivation contrasting its runtime with that of vanilla Hadoop. The article proves mathematically that the proposed system functions 1.67 times faster than the vanilla version of Hadoop.
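The pipelined-dataflow idea can be illustrated with generators: instead of buffering all map output and shuffling it to the reducers in one burst, each record is streamed onward as soon as it is produced, so map and reduce work overlap. This is a minimal word-count sketch of the concept, not Hadoop code — all names and structure here are hypothetical.

```python
# Illustrative sketch of pipelined MapReduce dataflow: the mapper yields
# each (key, value) pair immediately and the reducer consumes the stream
# incrementally, so the two phases overlap instead of running back-to-back.

from collections import defaultdict
from typing import Dict, Iterable, Iterator, Tuple

def mapper(lines: Iterable[str]) -> Iterator[Tuple[str, int]]:
    """Streamed map phase for word count: emit (word, 1) per token."""
    for line in lines:
        for word in line.split():
            yield word, 1              # emitted immediately, not batched

def reducer(pairs: Iterable[Tuple[str, int]]) -> Dict[str, int]:
    """Consume the map stream incrementally and aggregate the counts."""
    counts: Dict[str, int] = defaultdict(int)
    for key, value in pairs:           # runs while the mapper still produces
        counts[key] += value
    return dict(counts)

result = reducer(mapper(["big data big", "data flow"]))
```

Because `mapper` is a generator, no intermediate buffer of the full map output ever exists; in a distributed setting this smooths the network traffic that the article characterises as “bursty”.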


Author(s):  
А.С. Антонов ◽  
И.В. Афанасьев ◽  
Вл.В. Воеводин

This paper provides an overview of the current state of supercomputer technology. The review is done from different points of view, from the construction features of modern computing devices to the architecture of large supercomputer complexes. It includes descriptions of the most powerful supercomputers in the world and in Russia as of early 2021, as well as some less powerful systems that are interesting from other points of view. The paper also focuses on development trends in the supercomputer industry and describes the best-known projects for building future exascale supercomputers.


2009 ◽  
Author(s):  
Blair Williams Cronin ◽  
Ty Tedmon-Jones ◽  
Lora Wilson Mau
