Adaptive Processor Allocation with Estimated Job Execution Time in Heterogeneous Computing Grid

Author(s): Kuo-Chan Huang, Kuan-Po Lai, Hsi-Ya Chang
2013, Vol 65 (2), pp. 886-902
Author(s): Hong Jun Choi, Dong Oh Son, Seung Gu Kang, Jong Myon Kim, Hsien-Hsin Lee, ...
2020, Vol 245, pp. 05037
Author(s): Caterina Marcon, Oxana Smirnova, Servesh Muralidharan

Experimental observations and advanced computer simulations in High Energy Physics (HEP) paved the way for the recent discoveries at the Large Hadron Collider (LHC) at CERN. Currently, Monte Carlo simulations account for a very significant share of the computational resources of the Worldwide LHC Computing Grid (WLCG). The current growth in available computing performance will not be enough to meet the expected demand of the forthcoming High Luminosity run (HL-LHC); more efficient simulation codes are therefore required. This study evaluates the impact of different build methods on simulation execution time. The Geant4 toolkit, the standard simulation code for the LHC experiments, consists of a set of libraries that can be either dynamically or statically linked to the simulation executable; dynamic linking is currently the preferred build method. In this work, three versions of the GCC compiler, namely 4.8.5, 6.2.0 and 8.2.0, have been used, and a comparison between four optimization levels (Os, O1, O2 and O3) has also been performed. Static builds, for all the GCC versions considered, reduce execution times by about 10%. Switching to a newer GCC version yields an average improvement of 30% in execution time regardless of the build type. In particular, a static build with GCC 8.2.0 leads to an improvement of about 34% with respect to the default configuration (GCC 4.8.5, dynamic, O2). The different GCC optimization flags do not affect the execution times.
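The comparison described in the abstract amounts to timing the same simulation workload under different build configurations and relating each result to the default build. The Python sketch below is a minimal timing harness illustrating that methodology, not the authors' actual setup; the executable names in BUILDS are hypothetical placeholders for binaries built with the respective GCC version, link type, and optimization level.

```python
import statistics
import subprocess
import time

# Hypothetical executable names: one simulation binary per build configuration
# (GCC version x link type x optimization level). Adjust to real build outputs.
BUILDS = {
    "gcc-4.8.5-dynamic-O2": "./sim_gcc485_dyn_O2",   # assumed baseline build
    "gcc-8.2.0-static-O2": "./sim_gcc820_static_O2",  # assumed candidate build
}

RUNS = 5  # repetitions per configuration to average out run-to-run fluctuations


def time_build(executable, runs=RUNS):
    """Run the simulation binary several times; return mean and stdev of wall-clock time."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([executable], check=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)


if __name__ == "__main__":
    baseline, _ = time_build(BUILDS["gcc-4.8.5-dynamic-O2"])
    for name, exe in BUILDS.items():
        mean, stdev = time_build(exe)
        gain = 100.0 * (baseline - mean) / baseline  # positive = faster than baseline
        print(f"{name}: {mean:.1f}s +/- {stdev:.1f}s ({gain:+.1f}% vs baseline)")
```

Averaging several runs per configuration keeps a single slow run (e.g. due to cold caches or background load) from skewing the comparison between build types.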


2012, pp. 1149-1174
Author(s): Xiaoyu Yang, Gen-Tao Chiang

It is becoming increasingly common for scientists in research institutes to use Grid computing resources for running computer simulations and managing data. Although some production Grids are available, many organizations and research projects still need to build their own. Building a Grid infrastructure is not a trivial job: it involves sharing and managing heterogeneous computing and data resources across different organizations and installing many specific software packages and various middleware, which can be complicated and time-consuming. It also requires good knowledge and understanding of distributed computing, parallel computing, and Grid technologies. Apart from building the physical Grid, building a user infrastructure that facilitates the use of and easy access to these physical resources is also a challenging task. In this chapter, the authors summarize hands-on experience in building an institutional Grid infrastructure. They describe knowledge and experience gained from installing Condor pools, PBS clusters, the Globus Toolkit, and SRB (Storage Resource Broker) for the computing Grid and data Grid. The authors also propose a User-Centered Design (UCD) approach to developing a Grid user infrastructure that facilitates use of the Grid and improves its usability.
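A user infrastructure of the kind proposed typically hides which resource manager sits behind a submission request. The Python sketch below is a minimal illustration of that idea under stated assumptions: the submit_job function and its parameters are hypothetical, the wrapper only dispatches to the standard condor_submit and qsub commands, and real middleware layers such as the Globus Toolkit or SRB are not represented.

```python
import subprocess

# Minimal illustration of a user-facing submission wrapper that hides whether
# the backend is a Condor pool or a PBS cluster. Function and parameter names
# are hypothetical placeholders, not part of any of the cited toolkits.


def submit_job(script_path, backend="condor"):
    """Submit a job script to the chosen backend and return the scheduler's output."""
    if backend == "condor":
        cmd = ["condor_submit", script_path]   # expects a Condor submit description file
    elif backend == "pbs":
        cmd = ["qsub", script_path]            # expects a PBS batch script
    else:
        raise ValueError(f"unknown backend: {backend}")
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()


if __name__ == "__main__":
    # The same user request is routed to either resource manager.
    print(submit_job("job.submit", backend="condor"))
    print(submit_job("job.pbs", backend="pbs"))
```

Presenting one submission interface over heterogeneous schedulers is one concrete way a UCD-driven user infrastructure can lower the barrier to using the underlying physical Grid.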

