Creating a New Generation of Software Development Environments, Compilers, and Algorithms for High-Performance Computing, Networks, and Data Management

2007 ◽  
Author(s):  
Robert A. van Engelen


Author(s):  
Reiner Anderl ◽  
Orkun Yaman

High Performance Computing (HPC) has become ubiquitous for simulations in the industrial context. To identify the requirements for integrating HPC-relevant data and processes, a survey was conducted among German car manufacturers and service and component suppliers. This contribution presents the results of the evaluation and suggests an architecture concept for integrating data and workflows related to CAE and HPC facilities into PLM. It describes the state of the art of HPC applications within the simulation domain. Intensive efforts are currently being invested in CAE data management; however, no approach to systematic data management for HPC yet exists. This study underlines the importance of an integrative approach to data management for HPC applications and develops an architectural framework for implementing HPC data management within the existing PLM landscape. Requirements on key functionalities and interfaces are defined, and a framework for a reference information model is conceptualized.
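Purely to make the idea of a reference information model concrete, the sketch below links a CAE simulation item in the PLM domain to the HPC job that produced its results. All class and field names are hypothetical assumptions chosen for illustration; the paper itself conceptualizes the model only at the framework level.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical reference information model connecting the PLM/CAE
# domain to the HPC domain. Names and fields are illustrative
# assumptions, not the schema proposed in the paper.

@dataclass
class HPCJobRecord:
    job_id: str            # scheduler job identifier
    cluster: str           # HPC facility the job ran on
    cores: int             # allocated cores
    walltime_s: float      # consumed wall-clock time in seconds
    submitted: datetime
    completed: datetime

@dataclass
class CAESimulationItem:
    part_number: str                        # PLM item under simulation
    solver: str                             # CAE solver used
    input_files: List[str] = field(default_factory=list)
    result_files: List[str] = field(default_factory=list)
    hpc_job: Optional[HPCJobRecord] = None  # link into the HPC domain
```

Keeping the HPC job metadata as a separate, linkable record (rather than folding it into the CAE item) mirrors the paper's goal of integrating two previously disconnected data domains through defined interfaces.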


2019 ◽  
Vol 3 (4) ◽  
pp. 902-904
Author(s):  
Alexander Peyser ◽  
Sandra Diaz Pier ◽  
Wouter Klijn ◽  
Abigail Morrison ◽  
Jochen Triesch

Large-scale in silico experimentation depends on the generation of connectomes beyond available anatomical structure. We suggest that linking research across the fields of experimental connectomics, theoretical neuroscience, and high-performance computing can enable a new generation of models bridging the gap between biophysical detail and global function. This Focus Feature on "Linking Experimental and Computational Connectomics" aims to bring together some examples from these domains as a step toward the development of more comprehensive generative models of multiscale connectomes.
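As a toy illustration of the kind of generative connectome model such work aims toward, the sketch below samples a directed network with distance-dependent connection probabilities. The exponential kernel and all parameters are assumptions chosen for demonstration, not a model from the Focus Feature.

```python
import numpy as np

# Toy generative connectome: neurons scattered in 2D, connected with a
# probability that decays exponentially with distance. The kernel and
# its parameters are illustrative assumptions, not a published model.

rng = np.random.default_rng(42)
n = 200                                    # number of neurons
pos = rng.uniform(0.0, 1.0, size=(n, 2))   # positions in a unit square

# pairwise Euclidean distances
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

p = 0.5 * np.exp(-d / 0.15)                # connection probability kernel
np.fill_diagonal(p, 0.0)                   # no self-connections

adjacency = rng.random((n, n)) < p         # sampled directed connectome
print("mean out-degree:", adjacency.sum(axis=1).mean())
```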


2014 ◽  
Vol 9 (2) ◽  
pp. 17-27 ◽  
Author(s):  
Ritu Arora ◽  
Maria Esteva ◽  
Jessica Trelogan

The process of developing a digital collection in the context of a research project often involves a pipeline pattern during which data growth, data types, and data authenticity need to be assessed iteratively in relation to the different research steps and in the interest of archiving. Throughout a project's lifecycle, curators organize newly generated data while cleaning and integrating legacy data where it exists, and decide what data will be preserved for the long term. Although these actions should be part of a well-oiled data management workflow, doing so presents practical challenges if the collection is very large and heterogeneous, or is accessed by several researchers concurrently. There is a need for data management solutions that can help curators run efficient, on-demand analyses of their collection so that they remain well informed about its evolving characteristics. In this paper, we describe our efforts toward developing a workflow that leverages open science High Performance Computing (HPC) resources for routinely and efficiently conducting data management tasks on large collections. We demonstrate that HPC resources and techniques can significantly reduce the time needed for critical data management tasks and enable dynamic archiving throughout the research process. We use a large archaeological data collection with a long and complex formation history as our test case. We share our experiences in adopting open science HPC resources for large-scale data management, which entails understanding how to use the open-source HPC environment and training users. These experiences can be generalized to meet the needs of other data curators working with large collections.
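To make the approach concrete, here is a minimal sketch of one embarrassingly parallel data-management task, computing fixity checksums across a collection, of the sort that maps naturally onto HPC resources. The directory name is hypothetical, and multiprocessing.Pool stands in for a real batch scheduler; this is not a reproduction of the authors' actual workflow.

```python
import hashlib
from multiprocessing import Pool
from pathlib import Path

# Sketch of a parallel fixity check over a large collection. On a real
# HPC system this work would be distributed across nodes by a batch
# scheduler; Pool stands in for that here (an assumption).

def checksum(path: Path) -> tuple:
    """Return (path, sha256) for one file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return str(path), h.hexdigest()

if __name__ == "__main__":
    # "collection" is a hypothetical root directory for illustration.
    files = [p for p in Path("collection").rglob("*") if p.is_file()]
    with Pool() as pool:
        for name, digest in pool.imap_unordered(checksum, files):
            print(digest, name)
```

Because each file is checksummed independently, the task scales almost linearly with the number of workers, which is what makes routine, on-demand collection analysis feasible on HPC allocations.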


2019 ◽  
pp. 28-31
Author(s):  
E. V. Glivenko ◽  
S. A. Sorokin ◽  
G. N. Petrova

The article is devoted to the design of high-performance computing devices for parallel processing of information. The problem of increasing the productivity of computing facilities by one or several orders of magnitude is considered using the example of the high-performance electronic computer M-10, created in the 1970s at the NIIVK. Whereas in a conventional computer the processing of numbers is specified by commands, in the M-10 the methods for processing a function were specified by operators taken from functional analysis. This made it possible to process an entire information line in parallel. Such systems came to be called "functional operator type machines". The main ideas presented in the article may be of interest to developers of a new generation of specialized machines, as well as to engineers involved in creating high-performance computing devices based on modern computing platform technologies.
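The contrast drawn here, commands operating on individual numbers versus operators applied to whole information lines, is essentially scalar versus vector processing. Below is a minimal sketch of that distinction, using NumPy purely as a modern analogue of the M-10's functional-operator style, not as a model of the machine itself.

```python
import numpy as np

line = np.arange(100_000, dtype=np.float64)  # an "information line"

# Command-style processing: one number at a time, as in a
# conventional computer of the era.
scalar_result = [x * 2.0 + 1.0 for x in line]

# Operator-style processing: a single operator transforms the whole
# line at once (NumPy is only an illustrative stand-in for the M-10).
vector_result = line * 2.0 + 1.0

assert np.allclose(scalar_result, vector_result)
```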

