Opportunistic Computing
Recently Published Documents

Total documents: 49 (five years: 1)
H-index: 9 (five years: 0)

2020, Vol. 24, pp. 100236
Author(s): Anis Ur Rahman, Asad Waqar Malik, Vishwani Sati, Arpita Chopra, Sri Devi Ravana

2020, Vol. 1525, pp. 012067
Author(s): M J Schnepf, R F von Cube, C Heidecker, M Fischer, M Giffels, et al.

2020, Vol. 245, pp. 07020
Author(s): Daniele Spiga, Stefano Dal Pra, Davide Salomoni, Andrea Ceccanti, Roberto Alfieri

In the past couple of years, we have been actively developing the Dynamic On-Demand Analysis Service (DODAS) as an enabling technology to deploy container-based clusters over hybrid, private or public, cloud infrastructures with almost zero effort. DODAS is particularly suitable for harvesting opportunistic computing resources, which is why several scientific communities have already integrated their computing use cases into DODAS-instantiated clusters, automating the instantiation, management and federation of HTCondor batch systems. The increasing demand, availability and utilization of HPC resources by and for multidisciplinary user communities often mandate the ability to transparently integrate, manage and mix HTC and HPC resources. In this paper, we discuss our experience extending and using DODAS to connect HPC and HTC resources in the context of a distributed Italian regional infrastructure involving multiple sites and communities. In this use case, DODAS automatically generates an HTCondor batch system on demand. Moreover, it dynamically and transparently federates sites that may also include HPC resources managed by SLURM; DODAS allows user workloads to make opportunistic and automated use of both HPC and HTC resources, thus effectively maximizing and optimizing resource utilization. We also report on our experience of using and federating HTCondor batch systems exploiting the JSON Web Token capabilities introduced in recent HTCondor versions, replacing traditional X.509 certificates in the whole chain of workload authorization. In this respect, we also report on how we integrated HTCondor, using OAuth, with the INDIGO IAM service.
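
The token-based authorization chain described above can be illustrated with a short Python sketch that validates a JSON Web Token before a workload is admitted to the batch system. This is only a sketch of the technique: the issuer URL, audience and scope name are hypothetical placeholders, and it uses the PyJWT library rather than the actual DODAS, HTCondor or INDIGO IAM code paths.

```python
# Minimal sketch of JWT-based workload authorization.
# The issuer, audience and scope below are hypothetical placeholders,
# not the actual DODAS / INDIGO IAM configuration.
import jwt                      # PyJWT
from jwt import PyJWKClient

ISSUER = "https://iam.example.org/"           # hypothetical IAM issuer
AUDIENCE = "https://htcondor.example.org"     # hypothetical batch endpoint
REQUIRED_SCOPE = "compute.create"             # hypothetical scope name


def authorize_workload(token: str) -> bool:
    """Return True if the bearer token allows submitting a workload."""
    # Fetch the issuer's public signing keys from its JWKS endpoint.
    jwks = PyJWKClient(f"{ISSUER}.well-known/jwks.json")
    signing_key = jwks.get_signing_key_from_jwt(token)

    # Verify signature, expiry, issuer and audience in one call.
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )

    # Finally, check that the token carries the scope we require.
    return REQUIRED_SCOPE in claims.get("scope", "").split()
```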


2020, Vol. 245, pp. 04035
Author(s): Martin Barisits, Mikhail Borodin, Alessandro Di Girolamo, Johannes Elmsheuser, Dmitry Golubkov, et al.

The ATLAS experiment at CERN’s LHC stores detector and simulation data in raw and derived data formats across more than 150 Grid sites worldwide, currently about 200 PB on disk and 250 PB on tape in total. Data have different access characteristics due to the various computational workflows, and can be served from different media, such as remote I/O, disk cache on hard disk drives, or SSDs. In addition, the larger data centers provide the majority of offline storage capability via tape systems. For the High-Luminosity LHC (HL-LHC), the estimated data storage requirements are several times larger than the present forecast of available resources under a flat-budget assumption. On the computing side, ATLAS Distributed Computing has been very successful in recent years in integrating high-performance and high-throughput computing and in using opportunistic computing resources for Monte Carlo simulation. On the other hand, no equivalent opportunistic storage exists. ATLAS started the Data Carousel project to increase the usage of less expensive storage, i.e. tape or even commercial storage, so it is not limited exclusively to tape technologies. Data Carousel orchestrates data processing between workload management, data management, and storage services, with the bulk data resident on offline storage. The processing is executed by staging a sliding window of inputs onto faster buffer storage and processing it promptly, such that only a small percentage of the input data is available at any one time. With this project, we aim to demonstrate that this is the natural way to dramatically reduce our storage cost. The first phase of the project started in the fall of 2018 and consisted of I/O tests of the sites’ archiving systems. Phase II now requires a tight integration of the workload and data management systems. Additionally, the Data Carousel studies the feasibility of running multiple computing workflows from tape. The project is progressing very well, and the results presented in this document will be used before LHC Run 3.
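
The sliding-window idea can be sketched as a simple staging loop: recall one batch of files from tape to buffer disk, process it while the next batch is being staged, then release the buffer copies so only roughly one window is resident at a time. The function names below (stage_from_tape, process, release) are placeholders; the real orchestration is performed by the ATLAS workload, data management and storage services, not by this code.

```python
# Illustrative sliding-window staging loop (placeholder functions; the real
# orchestration is done by the workload and data management services).
from concurrent.futures import ThreadPoolExecutor


def stage_from_tape(files):
    """Placeholder: recall a batch of files from tape to buffer disk."""
    ...


def process(files):
    """Placeholder: run the computing workflow over the staged files."""
    ...


def release(files):
    """Placeholder: free the buffer copies once processing is done."""
    ...


def carousel(all_files, window=100):
    """Keep only about one window of inputs on buffer storage at any time."""
    if not all_files:
        return
    batches = [all_files[i:i + window] for i in range(0, len(all_files), window)]
    with ThreadPoolExecutor(max_workers=1) as stager:
        pending = stager.submit(stage_from_tape, batches[0])
        for i, batch in enumerate(batches):
            pending.result()                     # wait until this batch is on disk
            if i + 1 < len(batches):             # prefetch the next window from tape
                pending = stager.submit(stage_from_tape, batches[i + 1])
            process(batch)
            release(batch)                       # only ~one window stays resident
```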


Author(s): Ashutosh Pattnaik, Xulong Tang, Onur Kayiran, Adwait Jog, Asit Mishra, et al.

2019, Vol. 11 (2), pp. 33
Author(s): Lionel Touseau, Nicolas Le Sommer

With the emergence of the Internet of Things, environmental sensing has been gaining interest, promising to improve agricultural practices by facilitating decision-making based on gathered environmental data (e.g., weather forecasting, crop monitoring, and soil moisture sensing). Environmental sensing, and by extension what is referred to as precision or smart agriculture, poses new challenges, especially regarding the collection of environmental data in the presence of connectivity disruptions, their gathering, and their exploitation by end-users or by systems that must perform actions according to the values of the collected data. In this paper, we present a middleware platform for the Internet of Things that implements disruption-tolerant opportunistic networking and computing techniques, and that makes it possible to expose and manage physical objects through Web-based protocols, standards and technologies, thus providing interoperability between objects and creating a Web of Things (WoT). This WoT-based opportunistic computing approach is backed up by a practical experiment whose outcomes are presented in this article.
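
A basic building block of such disruption-tolerant data gathering is a store-and-forward buffer: readings are queued locally while the sensing node is disconnected and flushed to a Web endpoint once connectivity returns. The sketch below illustrates that pattern with the requests library and a hypothetical gateway URL; it is not the middleware's actual API, just a minimal illustration of the technique.

```python
# Store-and-forward sketch for disruption-tolerant sensing.
# The gateway URL is a hypothetical placeholder, not the actual middleware API.
from collections import deque

import requests

GATEWAY = "https://wot-gateway.example.org/observations"  # hypothetical endpoint
buffer = deque()  # readings accumulated while the node is disconnected


def record(reading: dict) -> None:
    """Queue a sensor reading locally, then try to flush the backlog."""
    buffer.append(reading)
    flush()


def flush() -> None:
    """Push queued readings to the gateway; keep them if the link is down."""
    while buffer:
        reading = buffer[0]
        try:
            resp = requests.post(GATEWAY, json=reading, timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            return          # still disconnected: keep the backlog for later
        buffer.popleft()    # delivered, drop it from the local store
```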


2019, Vol. 214, pp. 07015
Author(s): Wenjing Wu, David Cameron

Virtualization is a commonly used solution for utilizing opportunistic computing resources in the HEP field, as it provides the unified software and OS layer that HEP computing tasks require on top of heterogeneous opportunistic resources. However, virtualization always carries a performance penalty: especially for short jobs, which are typical of volunteer computing tasks, the virtualization overhead reduces the CPU efficiency of the jobs. With the wide use of containers in HEP computing, we explored the possibility of adopting container technology in the ATLAS BOINC project and implemented a Native version in BOINC, which replaces VirtualBox with a Singularity container or direct use of the host machine's operating system. In this paper, we discuss 1) the implementation and workflow of the Native version in ATLAS BOINC; 2) the performance of the Native version compared to the previous virtualization version; 3) the limits and shortcomings of the Native version; and 4) the practice and outcome of deploying the Native version, which includes using it to backfill ATLAS Grid Tier-2 sites and other clusters, and to utilize idle computers from the CERN computing centre.
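
The Native mode's choice between a Singularity container and direct execution on the host OS can be pictured as a small wrapper: if a singularity binary is available, run the payload inside the experiment image, otherwise run it directly on the host. The image path and payload command below are hypothetical placeholders; this is a sketch of the decision logic, not the actual BOINC wrapper code.

```python
# Sketch of the "run in Singularity if available, else natively" decision.
# Image path and payload are hypothetical placeholders, not the real BOINC setup.
import shutil
import subprocess

IMAGE = "/cvmfs/atlas.example/containers/payload.sif"   # hypothetical image path
PAYLOAD = ["./run_atlas_payload.sh"]                    # hypothetical job script


def run_job():
    if shutil.which("singularity"):
        # Host provides Singularity: isolate the payload inside the container.
        cmd = ["singularity", "exec", IMAGE] + PAYLOAD
    else:
        # No container runtime: fall back to the host operating system directly.
        cmd = PAYLOAD
    return subprocess.run(cmd, check=True)
```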


2019, Vol. 214, pp. 03059
Author(s): Kenneth Herner, Andres Felipe Alba Hernandez, Shreyas Bhat, Dennis Box, Joseph Boyd, et al.

The FabrIc for Frontier Experiments (FIFE) project within the Fermilab Scientific Computing Division is charged with integrating offline computing components into a common computing stack for the non-LHC Fermilab experiments, supporting experiment offline computing, and consulting on new, novel workflows. We will discuss the general FIFE onboarding strategy, the upgrades and enhancements in the FIFE toolset, and plans for the coming year. These enhancements include: expansion of the opportunistic computing resources (including GPU and high-performance computing resources) available to experiments; assistance with commissioning computing resources at European sites for individual experiments; StashCache repositories for experiments; enhanced job monitoring tools; and a custom workflow management service. Additionally, we have completed the first phase of a Federated Identity Management system to make it easier for FIFE users to access Fermilab computing resources.

