Reusable Object-Oriented Solutions for Numerical Simulation of PDEs in a High Performance Environment

2006 ◽  
Vol 14 (2) ◽  
pp. 111-139 ◽  
Author(s):  
Andrea Lani ◽  
Tiago Quintino ◽  
Dries Kimpe ◽  
Herman Deconinck ◽  
Stefan Vandewalle ◽  
...  

Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability in order to ease the integration of new functionalities and algorithms. When designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.
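The Method-Command-Strategy pattern can be pictured with a minimal sketch. COOLFluiD itself is written in C++, and all names below are illustrative assumptions rather than the framework's actual API: a numerical method owns commands, and each command delegates its variable behaviour to an interchangeable strategy, so both can be reconfigured at run time.

```ts
// Strategy: an interchangeable policy the command delegates to.
interface FluxStrategy {
  compute(left: number, right: number): number;
}

class CentralFlux implements FluxStrategy {
  compute(left: number, right: number): number {
    return 0.5 * (left + right);
  }
}

class UpwindFlux implements FluxStrategy {
  compute(left: number, right: number): number {
    return left; // take the upstream state
  }
}

// Command: one self-contained action of the numerical method.
class ComputeRhsCommand {
  constructor(private flux: FluxStrategy) {}
  execute(states: number[]): number[] {
    const rhs: number[] = [];
    for (let i = 0; i + 1 < states.length; i++) {
      rhs.push(this.flux.compute(states[i], states[i + 1]));
    }
    return rhs;
  }
}

// Method: owns its commands; both commands and strategies can be
// swapped at run time, e.g. driven by a configuration file.
class SpaceMethod {
  constructor(private computeRhs: ComputeRhsCommand) {}
  takeStep(states: number[]): number[] {
    return this.computeRhs.execute(states);
  }
}

const method = new SpaceMethod(new ComputeRhsCommand(new UpwindFlux()));
console.log(method.takeStep([1, 2, 3, 4]));
```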

Author(s):  
Abirami. S ◽  
Shanmuga Priya. P

Cloud computing combines the computing and storage resources controlled by different operating systems to make services such as large-scale data storage and high-performance computing available to users. The benefits of low cost, negligible management (from a user's perspective), and greater flexibility come with increased security concerns, one of the most crucial issues holding back the wide-spread adoption of cloud computing. Data outsourced to a public cloud must be secured. This work presents Division and Replication of Data in the Cloud for Optimal Performance and Security (DROPS), which judiciously fragments user files into pieces and replicates them at strategic locations in the cloud. The division of a file into fragments is performed according to given user criteria, such that the individual fragments contain no meaningful information. Node separation is ensured by means of the Grid Topology algorithm. To further improve retrieval time, fragments are replicated on the nodes that generate the highest read/write requests. The data is encrypted using the AES encryption algorithm. Duplicate checking is implemented to provide efficient storage, and time-based access control supports a secure file-access system.
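The fragment-and-separate idea can be sketched as follows. This is a toy TypeScript illustration under assumed names and a simplified adjacency model, not the paper's actual Grid Topology algorithm: a file is split into individually meaningless pieces, and a greedy placement keeps fragments of the same file off neighbouring nodes.

```ts
import { Buffer } from 'node:buffer';

// Split data into roughly equal-sized fragments.
function fragment(data: Buffer, parts: number): Buffer[] {
  const size = Math.ceil(data.length / parts);
  const fragments: Buffer[] = [];
  for (let i = 0; i < parts; i++) {
    fragments.push(data.subarray(i * size, (i + 1) * size));
  }
  return fragments;
}

// Greedy node-separation placement over an adjacency list: once a node
// holds a fragment, all of its neighbours are blocked for this file.
function place(
  fragments: Buffer[],
  neighbours: Map<string, string[]>,
): Map<number, string> {
  const used = new Set<string>();
  const blocked = new Set<string>();
  const placement = new Map<number, string>();
  for (let i = 0; i < fragments.length; i++) {
    const node = [...neighbours.keys()].find(
      n => !used.has(n) && !blocked.has(n),
    );
    if (!node) throw new Error('not enough separated nodes');
    placement.set(i, node);
    used.add(node);
    for (const nb of neighbours.get(node) ?? []) blocked.add(nb);
  }
  return placement;
}

const topology = new Map<string, string[]>([
  ['A', ['B']], ['B', ['A', 'C']], ['C', ['B']], ['D', []],
]);
// Fragments 0 and 1 land on non-adjacent nodes, e.g. A and C.
console.log(place(fragment(Buffer.from('secret file contents'), 2), topology));
```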


2020 ◽  
Vol 245 ◽  
pp. 04035
Author(s):  
Martin Barisits ◽  
Mikhail Borodin ◽  
Alessandro Di Girolamo ◽  
Johannes Elmsheuser ◽  
Dmitry Golubkov ◽  
...  

The ATLAS experiment at CERN’s LHC stores detector and simulation data in raw and derived data formats across more than 150 Grid sites world-wide, currently about 200 PB on disk and 250 PB on tape in total. Data have different access characteristics due to various computational workflows, and can be accessed from different media, such as remote I/O, disk cache on hard disk drives, or SSDs. Larger data centers also provide the majority of offline storage capability via tape systems. For the High-Luminosity LHC (HL-LHC), the estimated data storage requirements are several factors larger than the present forecast of available resources, based on a flat-budget assumption. On the computing side, ATLAS Distributed Computing has been very successful in recent years with high-performance and high-throughput computing integration and in using opportunistic computing resources for Monte Carlo simulation. On the other hand, no equivalent opportunistic storage exists. ATLAS started the Data Carousel project to increase the usage of less expensive storage, i.e. tape or even commercial storage, so it is not limited to tape technologies exclusively. Data Carousel orchestrates data processing between workload management, data management, and storage services, with the bulk data resident on offline storage. The processing is executed by staging and promptly processing a sliding window of inputs onto faster buffer storage, such that only a small percentage of input data are available at any one time. With this project, we aim to demonstrate that this is a natural way to dramatically reduce storage costs. The first phase of the project started in the fall of 2018 and consisted of I/O tests of the sites’ archiving systems. Phase II now requires a tight integration of the workload and data management systems. Additionally, the Data Carousel studies the feasibility of running multiple computing workflows from tape. The project is progressing well, and the results presented in this document will be used before LHC Run 3.
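The sliding-window processing can be pictured with a small sketch. The names below are hypothetical stand-ins for the real workload- and data-management services: at most `window` inputs occupy the fast buffer at any one time, and a new stage-in from tape starts only as a job completes.

```ts
// Minimal sketch of sliding-window processing: `window` concurrent
// workers bound the buffer footprint. stageIn/process are stand-ins
// for the real data-management and workload systems.

async function stageIn(file: string): Promise<string> {
  await new Promise(r => setTimeout(r, 100)); // pretend: tape -> disk buffer
  return `/buffer/${file}`;
}

async function process(path: string): Promise<void> {
  await new Promise(r => setTimeout(r, 200)); // pretend: run the job
  console.log(`processed ${path}`);
}

async function carousel(files: string[], window: number): Promise<void> {
  const queue = [...files];
  async function worker(): Promise<void> {
    while (queue.length > 0) {
      const file = queue.shift()!;
      const path = await stageIn(file); // occupy one buffer slot
      await process(path);              // release it when done
    }
  }
  await Promise.all(Array.from({ length: window }, worker));
}

carousel(['run1.raw', 'run2.raw', 'run3.raw', 'run4.raw'], 2);
```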


MRS Bulletin ◽  
2006 ◽  
Vol 31 (4) ◽  
pp. 324-328 ◽  
Author(s):  
Lisa Dhar

Holographic storage is considered a promising successor to currently available optical storage technologies. Enabling significant gains in both data transfer rates and storage densities, holographic storage and its capabilities have gained a great deal of recent attention. One of the primary challenges in the advancement of holographic storage has been the development of suitable recording materials. In this article, we provide a brief introduction to holographic storage and its potential advantages over current technologies, outline the requirements for recording materials, and survey candidate materials. We end by highlighting recent progress in photopolymer materials that has produced materials satisfying the requirements for holographic storage and has enabled significant demonstrations of the viability of this technology.


Author(s):  
Ismail Akturk ◽  
Xinqi Wang ◽  
Tevfik Kosar

The unbounded increase in the size of data generated by scientific applications necessitates collaboration and sharing among the nation’s education and research institutions. Simply purchasing high-capacity, high-performance storage systems and adding them to the existing infrastructure of the collaborating institutions does not solve the underlying and highly challenging data-handling problem. Scientists are compelled to spend a great deal of time and energy on solving basic data-handling issues, such as the physical location of data, how to access it, and/or how to move it to visualization and/or compute resources for further analysis. This chapter presents the design and implementation of a reliable and efficient distributed data storage system, PetaShare, which spans multiple institutions across the state of Louisiana. At the back-end, PetaShare provides a unified name space and efficient data movement across geographically distributed storage sites. At the front-end, it provides lightweight clients that enable easy, transparent, and scalable access. In PetaShare, the authors have designed and implemented an asynchronously replicated multi-master metadata system for enhanced reliability and availability. The authors also present a high-level cross-domain metadata schema to provide a structured, systematic view of the multiple science domains supported by PetaShare.
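A unified name space of this kind can be illustrated schematically. The sketch below uses invented names, not PetaShare's actual interfaces: one logical path resolves to several physical replicas across sites, and a read picks the lowest-latency copy.

```ts
// Illustrative sketch of a unified name space over replicated storage.
interface Replica { site: string; physicalPath: string; rttMs: number; }

class MetadataCatalog {
  private entries = new Map<string, Replica[]>();

  register(logicalPath: string, replica: Replica): void {
    const list = this.entries.get(logicalPath) ?? [];
    list.push(replica);
    this.entries.set(logicalPath, list);
  }

  // Resolve a logical path to the replica with the lowest latency.
  resolve(logicalPath: string): Replica {
    const list = this.entries.get(logicalPath);
    if (!list || list.length === 0) throw new Error(`no replica: ${logicalPath}`);
    return list.reduce((a, b) => (a.rttMs <= b.rttMs ? a : b));
  }
}

const catalog = new MetadataCatalog();
catalog.register('/petashare/coastal/wave.dat',
  { site: 'siteA', physicalPath: '/vault7/wave.dat', rttMs: 3 });
catalog.register('/petashare/coastal/wave.dat',
  { site: 'siteB', physicalPath: '/vault2/wave.dat', rttMs: 11 });
console.log(catalog.resolve('/petashare/coastal/wave.dat')); // picks siteA
```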


Author(s):  
Tran Thanh Luong ◽  
Le My Canh

JavaScript has become more and more popular in recent years because of its rich features: it is dynamic, interpreted, and object-oriented with first-class functions. Furthermore, JavaScript is designed with an event-driven, non-blocking I/O model that boosts the performance of the overall application, especially in the case of Node.js. To take advantage of these characteristics, many design patterns that implement asynchronous programming for JavaScript have been proposed. However, choosing the right pattern and implementing good asynchronous source code is a challenge, and a poor choice can easily lead to a less robust application and low-quality source code. Extending our previous works on exception-handling code smells in JavaScript and exception-handling code smells in JavaScript asynchronous programming with promises, this research studies the impact of three JavaScript asynchronous programming patterns on the quality of source code and applications.
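For reference, the three asynchronous patterns usually compared in this setting (error-first callbacks, promises, and async/await) look as follows on one toy task; a TypeScript/Node.js sketch, not code from the study itself.

```ts
import { readFile } from 'node:fs';
import { readFile as readFilePromise } from 'node:fs/promises';

// 1. Continuation-passing callbacks (Node's error-first convention).
readFile('data.txt', 'utf8', (err, text) => {
  if (err) { console.error(err); return; }
  console.log(text.length);
});

// 2. Promises: errors propagate along the chain to one catch handler.
readFilePromise('data.txt', 'utf8')
  .then(text => console.log(text.length))
  .catch(err => console.error(err));

// 3. async/await: sequential-looking code with try/catch error handling.
async function main(): Promise<void> {
  try {
    const text = await readFilePromise('data.txt', 'utf8');
    console.log(text.length);
  } catch (err) {
    console.error(err);
  }
}
main();
```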


Author(s):  
Maryam Hammami ◽  
Hatem Bellaaj

Cloud storage is one of the most important issues today, due to rapidly changing needs and a huge mass of varied and important data to back up. In this paper, we describe a work in progress and propose a flexible system architecture for data storage in the Cloud. This system is centered on the Data Manager module, which provides various functions such as the dispersion of data into fragments, encryption, and storage of the fragments. This architecture proves to be very relevant: it ensures consistency between the different components, and it ensures the security and availability of data.
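The Data Manager pipeline can be sketched as follows; the function names are illustrative assumptions, not the proposed system's interface. Each fragment is encrypted independently (here with AES-256-GCM from Node's crypto module) before being handed to a storage back end.

```ts
import { createCipheriv, randomBytes } from 'node:crypto';
import { Buffer } from 'node:buffer';

interface EncryptedFragment { iv: Buffer; tag: Buffer; data: Buffer; }

// Encrypt one fragment with AES-256-GCM, using a fresh nonce each time.
function encryptFragment(fragment: Buffer, key: Buffer): EncryptedFragment {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(fragment), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

// Disperse: split the payload, encrypt each piece; each element would
// then be shipped to a different storage node.
function disperse(payload: Buffer, parts: number, key: Buffer): EncryptedFragment[] {
  const size = Math.ceil(payload.length / parts);
  const out: EncryptedFragment[] = [];
  for (let i = 0; i < parts; i++) {
    out.push(encryptFragment(payload.subarray(i * size, (i + 1) * size), key));
  }
  return out;
}

const key = randomBytes(32); // in practice, from a key-management service
console.log(disperse(Buffer.from('cloud payload'), 3, key).length); // 3
```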


Author(s):  
Jack Dongarra ◽  
Laura Grigori ◽  
Nicholas J. Higham

A number of features of today’s high-performance computers make it challenging to exploit these machines fully for computational science. These include increasing core counts but stagnant clock frequencies; the high cost of data movement; use of accelerators (GPUs, FPGAs, coprocessors), making architectures increasingly heterogeneous; and multiple precisions of floating-point arithmetic, including half-precision. Moreover, as well as maximizing speed and accuracy, minimizing energy consumption is an important criterion. New generations of algorithms are needed to tackle these challenges. We discuss some approaches that we can take to develop numerical algorithms for high-performance computational science, with a view to exploiting the next generation of supercomputers. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
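One classic algorithm family that addresses several of these challenges at once, given here only as an illustration and not as this article's specific contribution, is mixed-precision iterative refinement: solve cheaply in low precision, then recover accuracy by computing residuals in high precision. A toy sketch on a 2x2 system, emulating float32 with Math.fround:

```ts
type Vec2 = [number, number];
const A: number[][] = [[4, 1], [1, 3]];
const b: Vec2 = [1, 2];

// Low-precision solve by Cramer's rule, every operation rounded to f32.
function solveLow(rhs: Vec2): Vec2 {
  const f = Math.fround;
  const det = f(f(A[0][0] * A[1][1]) - f(A[0][1] * A[1][0]));
  return [
    f(f(f(rhs[0] * A[1][1]) - f(rhs[1] * A[0][1])) / det),
    f(f(f(A[0][0] * rhs[1]) - f(A[1][0] * rhs[0])) / det),
  ];
}

// Residual r = b - A x, accumulated in full float64.
function residual(x: Vec2): Vec2 {
  return [
    b[0] - (A[0][0] * x[0] + A[0][1] * x[1]),
    b[1] - (A[1][0] * x[0] + A[1][1] * x[1]),
  ];
}

let x = solveLow(b);
for (let it = 0; it < 5; it++) {
  const d = solveLow(residual(x)); // cheap low-precision correction
  x = [x[0] + d[0], x[1] + d[1]];
}
console.log(x, residual(x)); // residual shrinks toward double precision
```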


2021 ◽  
Vol 47 (2) ◽  
pp. 1-28
Author(s):  
Goran Flegar ◽  
Hartwig Anzt ◽  
Terry Cojean ◽  
Enrique S. Quintana-Ortí

The use of mixed precision in numerical algorithms is a promising strategy for accelerating scientific applications. In particular, the adoption of specialized hardware and data formats for low-precision arithmetic in high-end GPUs (graphics processing units) has motivated numerous efforts aiming at carefully reducing the working precision in order to speed up the computations. For algorithms whose performance is bound by memory bandwidth, the idea of compressing their data before (and after) memory accesses has received considerable attention. One idea is to store an approximate operator, such as a preconditioner, in lower than working precision, hopefully without impacting the algorithm's output. We realize the first high-performance implementation of an adaptive-precision block-Jacobi preconditioner, which selects on the fly the precision format used to store the preconditioner data, taking into account the numerical properties of the individual preconditioner blocks. We implement the adaptive block-Jacobi preconditioner as production-ready functionality in the Ginkgo linear algebra library, considering not only the precision formats that are part of the IEEE standard, but also customized formats which optimize the length of the exponent and significand to the characteristics of the preconditioner blocks. Experiments run on a state-of-the-art GPU accelerator show that our implementation offers attractive runtime savings.
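The adaptive selection can be pictured schematically. Ginkgo's production code is C++/CUDA, and its actual criterion and custom formats are more refined; the sketch below only illustrates the principle of matching each block's storage format to its conditioning.

```ts
// Store each Jacobi block in the cheapest format whose unit roundoff
// does not swamp the block's conditioning.
type Format = 'fp64' | 'fp32' | 'fp16';
const unitRoundoff: Record<Format, number> = {
  fp64: 1.1e-16, fp32: 6.0e-8, fp16: 4.9e-4,
};

// Pick the shortest format such that cond(block) * u stays below a
// target perturbation tolerance tau.
function chooseFormat(cond: number, tau = 1e-2): Format {
  for (const fmt of ['fp16', 'fp32', 'fp64'] as Format[]) {
    if (cond * unitRoundoff[fmt] <= tau) return fmt;
  }
  return 'fp64'; // fall back to full working precision
}

console.log(chooseFormat(10));   // fp16: well-conditioned block
console.log(chooseFormat(1e5));  // fp32: moderately conditioned block
console.log(chooseFormat(1e12)); // fp64: ill-conditioned block
```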

