Harnessing the Department of Energy's High-Performance Computing Expertise to Strengthen the U.S. Chemical Enterprise

2012
Author(s):
David A. Dixon
Michel Dupuis
Bruce C. Garrett
Jeffrey B. Neaton
Charity Plata
...  
Author(s):  
Levente Hajdu
Jérôme Lauret
Radomir A. Mihajlović

In this chapter, the authors discuss issues surrounding High-Performance Computing (HPC)-driven science, using the example of petascale Monte Carlo experiments conducted at Brookhaven National Laboratory (BNL), one of the US Department of Energy (DOE) High Energy and Nuclear Physics (HENP) research sites. BNL, which hosts the only remaining US-based HENP experiments and apparatus, is an appropriate site for studying the nature of High-Throughput Computing (HTC)-hungry experiments and the short historical development of the HPC technology used in such experiments. The development of parallel processors, multiprocessor systems, custom clusters, supercomputers, networked super systems, and hierarchical parallelism is presented in an evolutionary manner. Coarse-grained, rigid Grid system parallelism is contrasted with cloud computing, which this chapter classifies as flexible, fine-grained soft system parallelism. In evaluating the various high-performance computing options, a clear distinction is drawn between high-availability-bound enterprise computing and high-scalability-bound scientific computing. This distinction is used to further differentiate cloud computing from pre-cloud technologies and to show how cloud computing fits better into scientific HPC.
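To make the HTC/HPC distinction above concrete, here is a minimal sketch (not taken from the chapter) of an embarrassingly parallel Monte Carlo computation in Python: each worker process draws its own samples and never communicates with the others, which is the defining trait of the high-throughput, scale-out workloads the chapter contrasts with tightly coupled HPC jobs. The worker count and sample sizes are illustrative assumptions.

# Minimal sketch: embarrassingly parallel Monte Carlo estimate of pi.
# Each worker is an independent task with no inter-process communication,
# mirroring an HTC-style farm rather than a tightly coupled HPC job.
import random
from multiprocessing import Pool

def count_hits(args):
    """Count samples falling inside the unit quarter-circle."""
    seed, n_samples = args
    rng = random.Random(seed)          # independent stream per worker
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_workers = 8                      # illustrative; scales out by adding tasks
    samples_per_worker = 200_000
    tasks = [(seed, samples_per_worker) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        hits = pool.map(count_hits, tasks)
    pi_estimate = 4.0 * sum(hits) / (n_workers * samples_per_worker)
    print(f"pi is approximately {pi_estimate:.5f}")

Because the tasks share nothing, the same pattern maps directly onto a Grid or cloud batch system by replacing the process pool with independently scheduled jobs.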


2020
Vol 22 (6)
pp. 75-80
Author(s):
James J. Hack
Michael E. Papka

1996
Vol 5 (3)
pp. 239-249
Author(s):
Bill Appelbe
Donna Bergmark

Applications programming for high-performance computing is notoriously difficult. Although parallel programming is intrinsically complex, the principal reason high-performance computing is difficult is the lack of effective software tools. We believe that this lack of tools is, in turn, largely due to market forces rather than our inability to design and build such tools. Unfortunately, the poor availability and utilization of parallel tools hurt the entire supercomputing industry and the U.S. high-performance computing initiative, which is focused on applications. A disproportionate amount of resources is being spent on faster hardware and architectures while tools are neglected. This article introduces a taxonomy of tools, analyzes the major factors that contribute to this situation, suggests ways in which the imbalance could be redressed, and discusses the likely evolution of tools.

