On Construction of a Multi-Grid Resource Selection Strategy on Grids

Author(s):  
Chao-Tung Yang ◽  
Wen-Jen Hu ◽  
Kuan-Chou Lai

Grid computing is now in widespread use, integrating geographically distributed computing resources across multiple virtual organizations to achieve high performance computing. A single grid often cannot provide vast resources, because each virtual organization manages its computing resources only at its own organizational scale. This paper presents a new grid architecture, named Multi-Grid, which integrates multiple computational grids from different virtual organizations. This study builds a resource broker for multiple grid environments, integrating a number of single grids from different virtual organizations without organizational limits; the purpose of this multiple-grid resource broker is to avoid wasting resources. In addition, this study proposes a Multi-Grid Resource Selection Strategy (MRGSS) that lets the resource broker allocate resources better before submitting jobs, avoiding the network congestion that would otherwise degrade performance.
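As an illustration of what a multi-grid selection criterion could look like, here is a minimal broker sketch that ranks candidate sites by a weighted combination of free CPUs and measured bandwidth to the submitting client. The paper does not publish MRGSS's exact formula, so the field names (`free_cpus`, `bandwidth_mbps`) and the weights are hypothetical:

```python
def select_site(sites, cpu_weight=0.5, net_weight=0.5):
    """Rank candidate grid sites by a combined CPU/bandwidth score.

    Illustrative sketch only: the scoring formula and site attributes are
    assumptions, not the published MRGSS definition.
    """
    max_bw = max(s["bandwidth_mbps"] for s in sites)

    def score(site):
        # Fraction of CPUs currently free at the site.
        cpu = site["free_cpus"] / max(site["total_cpus"], 1)
        # Bandwidth normalized against the best-connected candidate,
        # so congested links are penalized before job submission.
        net = site["bandwidth_mbps"] / max_bw
        return cpu_weight * cpu + net_weight * net

    return max(sites, key=score)
```

A broker would refresh `free_cpus` and `bandwidth_mbps` from monitoring data (e.g. periodic probes) before each selection, so the ranking reflects current rather than nominal capacity.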

Author(s):  
Vytautas Jancauskas ◽  
Tomasz Piontek ◽  
Piotr Kopta ◽  
Bartosz Bosak

We describe a method for queue wait time prediction in supercomputing clusters. It was designed as part of a multi-criteria brokering mechanism for resource selection in a multi-site High Performance Computing environment. The aim is to incorporate the time jobs stay queued in the scheduling system into the selection criteria. Our method can also be used by end users to estimate the time to completion of their computing jobs. It uses historical data about the particular system to make predictions, and returns a list of probability estimates of the form (t_i, p_i), where p_i is the probability that the job will start before time t_i. The times t_i can be chosen more or less freely when deploying the system. Compared to regression methods that return only a single number as a queue wait time estimate (usually without error bars), our prediction system provides more useful information. The probability estimates are calculated using Bayes' theorem with the naive assumption that the attributes describing the jobs are independent. They are further calibrated to make them as accurate as possible given the available data. We describe our service, its REST API and the underlying methods in detail, and provide empirical evidence in support of the method's efficacy. This article is part of the theme issue ‘Multiscale modelling, simulation and computing: from the desktop to the exascale’.
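The per-threshold naive Bayes estimate described above can be sketched as follows. The job attributes, thresholds, and Laplace smoothing choices are illustrative assumptions, not the authors' actual service, and the calibration step they apply afterwards is omitted:

```python
from collections import defaultdict

class QueueWaitPredictor:
    """For each threshold t_i, estimate P(job starts before t_i) from
    historical jobs via naive Bayes. Sketch only; attribute names and
    smoothing are assumptions."""

    def __init__(self, thresholds):
        self.thresholds = thresholds                       # the times t_i
        # per threshold: (attribute, value) -> [count started before t, count seen]
        self.counts = {t: defaultdict(lambda: [0, 0]) for t in thresholds}
        self.prior = {t: [0, 0] for t in thresholds}       # [started before t, total]

    def train(self, jobs):
        # each historical job: (attributes dict, observed queue wait in seconds)
        for attrs, wait in jobs:
            for t in self.thresholds:
                hit = wait <= t
                self.prior[t][1] += 1
                if hit:
                    self.prior[t][0] += 1
                for key_value in attrs.items():
                    c = self.counts[t][key_value]
                    c[1] += 1
                    if hit:
                        c[0] += 1

    def predict(self, attrs):
        # returns the list of (t_i, p_i) pairs described in the abstract
        result = []
        for t in self.thresholds:
            started, total = self.prior[t]
            # odds form of Bayes' theorem with Laplace smoothing;
            # each attribute contributes an independent likelihood ratio
            p = (started + 1) / (total + 2)
            odds = p / (1 - p)
            for key_value in attrs.items():
                hit, seen = self.counts[t][key_value]
                p_given_hit = (hit + 1) / (started + 2)
                p_given_miss = (seen - hit + 1) / (total - started + 2)
                odds *= p_given_hit / p_given_miss
            result.append((t, odds / (1 + odds)))
        return result
```

The odds form makes the independence assumption explicit: each attribute multiplies in one likelihood ratio, and the smoothing keeps every ratio finite even for attribute values never seen in the history.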


Author(s):  
Peter V Coveney

We introduce a definition of Grid computing which is adhered to throughout this Theme Issue. We compare the evolution of the World Wide Web with current aspirations for Grid computing and indicate areas that need further research and development before a generally usable Grid infrastructure becomes available. We discuss work that has been done in order to make scientific Grid computing a viable proposition, including the building of Grids, middleware developments, computational steering and visualization. We review science that has been enabled by contemporary computational Grids, and associated progress made through the widening availability of high performance computing.


Author(s):  
Dimosthenis Kyriazis ◽  
Andreas Menychtas ◽  
Konstantinos Tserpes ◽  
Theodoros Athanaileas ◽  
Theodora Varvarigou

A constantly increasing number of applications from various scientific fields are adopting Grid technologies in order to take advantage of their capabilities: the advent of Grid environments has made it feasible to solve computationally intensive problems in a reliable and cost-effective way. This book chapter presents and describes how high performance computing in general, and Grids specifically, can be applied in biomedicine. The latter poses a number of requirements, both computational and sharing / networking ones. In this context, we describe in detail how Grid environments can fulfill these requirements. Furthermore, this book chapter includes a set of cases and scenarios of biomedical applications on Grids, in order to highlight the added value of distributed computing in this specific domain.


2011 ◽  
Vol 268-270 ◽  
pp. 1000-1000

Removed due to plagiarism. The original paper was published as: S. Saberi, P. Trunfio, D. Talia, M. Fesharaki, K. Badie, "Using Social Network and Semantic Overlay Network Approaches to Share Knowledge in Distributed Data Mining Scenarios". Proc. of the 8th Int. Conference on High Performance Computing and Simulation (HPCS 2010), Caen, France, pp. 536-544, IEEE Computer Society Press, June 2010. ISBN 978-1-4244-6827-0. DOI: 10.1109/HPCS.2010.5547080


2014 ◽  
pp. 77-81
Author(s):  
Chefi Triki ◽  
Lucio Grandinetti

In this paper we discuss the use of computational grids to solve stochastic optimization problems. These problems are generally difficult to solve and are often characterized by a large number of variables and constraints. Furthermore, some applications require a real-time solution. Obtaining reasonable results is a difficult objective without the use of high performance computing. Here we present a grid-enabled path-following algorithm and discuss some experimental results.


2009 ◽  
Vol 17 (4) ◽  
pp. 545-560 ◽  
Author(s):  
Alexander Fölling ◽  
Christian Grimme ◽  
Joachim Lepping ◽  
Alexander Papaspyrou ◽  
Uwe Schwiegelshohn

In our work, we address the problem of workload distribution within a computational grid. In this scenario, users submit jobs to local high performance computing (HPC) systems which are, in turn, interconnected such that the exchange of jobs with other sites becomes possible. Providers are able to avoid local execution of jobs by offering them to other HPC sites. In our implementation, this distribution decision is made by a fuzzy system controller whose parameters can be adjusted to establish different exchange behaviors. In such a system, it is essential that HPC sites can only benefit if the workload is equitably (not necessarily equally) portioned among all participants. However, each site egoistically strives only to minimize its own jobs' response times, often at the expense of other sites. This scenario is particularly suited for the application of a competitive coevolutionary algorithm: the fuzzy systems of the participating HPC sites are modeled as species that evolve in different populations while having to compete within the commonly shared ecosystem. Using real workload traces and grid setups, we show that opportunistic cooperation leads to significant improvements for each HPC site as well as for the overall system.
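The competitive coevolutionary setup described above can be sketched with a toy model. Here the fuzzy controller is reduced to a single offloading threshold per site, and a trivial load simulation stands in for the real workload traces; both simplifications are assumptions for illustration, not the paper's actual method:

```python
import random

def simulate(thresholds, workloads):
    """Toy shared ecosystem: each site offloads jobs larger than its
    threshold to the currently least-loaded other site. Returns per-site
    accumulated load, a crude stand-in for response time."""
    loads = [0.0] * len(thresholds)
    for site, jobs in enumerate(workloads):
        for job in jobs:
            if job > thresholds[site] and len(loads) > 1:
                target = min((i for i in range(len(loads)) if i != site),
                             key=lambda i: loads[i])
                loads[target] += job
            else:
                loads[site] += job
    return loads

def coevolve(workloads, pop_size=8, generations=30, seed=0):
    """One population of candidate thresholds per site (one species each).
    Each candidate is evaluated against the current best of every other
    site, so the species compete within the shared simulation."""
    rng = random.Random(seed)
    pops = [[rng.uniform(0, 10) for _ in range(pop_size)] for _ in workloads]
    best = [p[0] for p in pops]
    for _ in range(generations):
        for site, pop in enumerate(pops):
            def fitness(candidate):
                trial = best.copy()
                trial[site] = candidate
                # egoistic objective: minimize only this site's own load
                return -simulate(trial, workloads)[site]
            pop.sort(key=fitness, reverse=True)
            best[site] = pop[0]
            # replace the weaker half with mutants of the current leader
            for i in range(pop_size // 2, pop_size):
                pop[i] = max(0.0, pop[0] + rng.gauss(0, 1))
    return best
```

The key structural point this preserves from the abstract is that no individual is evaluated in isolation: a threshold's fitness depends on the strategies the other sites currently play, which is what makes the coevolution competitive.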

