Large Scale Computing and Storage Requirements for High Energy Physics

2010 ◽  
Author(s):  
Richard A. Gerber ◽  
Harvey Wasserman
2020 ◽  
Vol 245 ◽  
pp. 07036
Author(s):  
Christoph Beyer ◽  
Stefan Bujack ◽  
Stefan Dietrich ◽  
Thomas Finnern ◽  
Martin Flemming ◽  
...  

DESY is one of the largest accelerator laboratories in Europe. It develops and operates state-of-the-art accelerators for fundamental science in the areas of high energy physics, photon science and accelerator development. While for decades high energy physics (HEP) was the most prominent user of the DESY compute, storage and network infrastructure, other scientific areas such as photon science and accelerator development have caught up and now dominate the demands on DESY's infrastructure resources, with significant consequences for IT resource provisioning. In this contribution, we present an overview of the computational, storage and network resources serving the various physics communities on site, ranging from high-throughput computing (HTC) batch-like offline processing in the Grid and the interactive user analysis resources of the National Analysis Factory (NAF) for the HEP community, to the computing needs of accelerator development and of photon science facilities such as PETRA III or the European XFEL. Since DESY is involved in these experiments and their data taking, their requirements include fast, low-latency online processing for data taking and calibration as well as offline processing, i.e. high-performance computing (HPC) workloads, which run on the dedicated Maxwell HPC cluster. As all communities face significant challenges from changing environments and increasing data rates in the coming years, we discuss how this will be reflected in the necessary changes to the computing and storage infrastructures. We present DESY's compute cloud and container orchestration plans as a basis for infrastructure and platform services. We also show examples of Jupyter notebooks for small-scale interactive analysis, as well as their integration into large-scale resources such as batch systems or Spark clusters.
To overcome the fragmentation of the various resources for all scientific communities at DESY, we explore how to integrate them into a seamless user experience in an Interdisciplinary Data Analysis Facility.
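A common pattern behind the notebook-to-batch integration mentioned in this abstract is to execute a notebook headlessly on an HTC cluster. The sketch below builds an HTCondor submit description that runs a notebook via `jupyter nbconvert --execute`; the paths, file names and resource values are illustrative assumptions, not DESY's actual configuration.

```python
def condor_submit_for_notebook(notebook, cpus=1, memory_mb=2000):
    """Build an HTCondor submit description that executes a Jupyter
    notebook headlessly with nbconvert (all names are illustrative)."""
    out = notebook.replace(".ipynb", ".out.ipynb")
    lines = [
        "executable     = /usr/bin/jupyter",
        f"arguments      = nbconvert --to notebook --execute {notebook} --output {out}",
        f"request_cpus   = {cpus}",
        f"request_memory = {memory_mb}",
        f"output         = {notebook}.stdout",
        f"error          = {notebook}.stderr",
        f"log            = {notebook}.log",
        "queue",
    ]
    return "\n".join(lines) + "\n"

# Hypothetical analysis notebook asking for 4 cores:
submit = condor_submit_for_notebook("analysis.ipynb", cpus=4)
print(submit)
```

The generated text would be written to a `.sub` file and handed to `condor_submit`; the same wrapper idea carries over to other batch systems by swapping the description format.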


2004 ◽  
Vol 13 (03) ◽  
pp. 391-502 ◽  
Author(s):  
MASSIMO GIOVANNINI

Cosmology, high-energy physics and astrophysics are today converging on the study of large-scale magnetic fields. While the experimental evidence for the existence of large-scale magnetization in galaxies, clusters and superclusters is rather compelling, the origin of the phenomenon remains puzzling, especially in light of the most recent observations. The purpose of the present review is to describe the physical motivations and the open theoretical problems related to the existence of large-scale magnetic fields.


2005 ◽  
Vol 20 (14) ◽  
pp. 3021-3032
Author(s):  
Ian M. Fisk

In this review, the computing challenges facing the current and next generation of high energy physics experiments will be discussed. High energy physics computing represents an interesting infrastructure challenge as the use of large-scale commodity computing clusters has increased. The causes and ramifications of these infrastructure challenges will be outlined. Increasing requirements, limited physical infrastructure at computing facilities, and limited budgets have driven many experiments to deploy distributed computing solutions to meet the growing computing needs for analysis, reconstruction, and simulation. The current generation of experiments has developed and integrated a number of solutions to facilitate distributed computing. The current work of the running experiments gives an insight into the challenges that will be faced by the next generation of experiments and the infrastructure that will be needed.


2005 ◽  
Vol 50 (S1) ◽  
pp. S116-S121 ◽  
Author(s):  
S. Ya. Beloglovsky ◽  
S. F. Burachas ◽  
N. A. Vassilieva ◽  
M. K. Ziomko ◽  
S. V. Lysov ◽  
...  

2017 ◽  
Vol 4 (6) ◽  
pp. 934-942

The Institute of High Energy Physics (IHEP), Chinese Academy of Sciences (CAS), is China's largest laboratory for basic science. IHEP aims to understand the universe at the most fundamental level, from the smallest subatomic particles to the large-scale structure of the cosmos. In addition to theoretical and experimental research into particle and astroparticle physics, IHEP pursues a broad range of research in related fields, from accelerator technologies to nuclear analysis techniques. The Institute also provides beam facilities for researchers in other fields of science.


1977 ◽  
Vol 24 (1) ◽  
pp. 408-412 ◽  
Author(s):  
R. F. Althaus ◽  
F. A. Kirsten ◽  
K. L. Lee ◽  
S. R. Olson ◽  
L. J. Wagner ◽  
...  

2019 ◽  
Vol 214 ◽  
pp. 06007
Author(s):  
Malachi Schram ◽  
Nathan Tallent ◽  
Ryan Friese ◽  
Alok Singh ◽  
Ilkay Altintas

In this research, we investigated two approaches to detecting job anomalies and/or contention in large-scale computing efforts: 1. preemptive job scheduling using binomial-classification long short-term memory (LSTM) networks; 2. forecasting intra-node computing loads from the active jobs and additional job(s). For the first approach, we achieved a 14% improvement in computational resource utilization and an overall classification accuracy of 85% on real tasks executed in a high energy physics computing workflow. In this paper, we also present the preliminary results of the second approach.
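The binary-classification step of the first approach can be illustrated with a minimal LSTM forward pass over a job's time series of node metrics. This is a from-scratch NumPy sketch with random, untrained weights; the feature names, dimensions and readout are illustrative assumptions, not the network described in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b stack the input/forget/cell/output gates."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])            # input gate
    f = sigmoid(z[n:2 * n])       # forget gate
    g = np.tanh(z[2 * n:3 * n])   # candidate cell state
    o = sigmoid(z[3 * n:])        # output gate
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

def classify_sequence(seq, W, U, b, w_out, b_out):
    """Run the LSTM over per-interval job metrics and return a sigmoid
    readout of the final hidden state, interpreted as P(anomaly)."""
    n = b.shape[0] // 4
    h, c = np.zeros(n), np.zeros(n)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return sigmoid(w_out @ h + b_out)

rng = np.random.default_rng(0)
n_features, hidden = 3, 8  # e.g. (cpu load, memory, io wait) per interval
W = rng.normal(scale=0.1, size=(4 * hidden, n_features))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
w_out = rng.normal(scale=0.1, size=hidden)

seq = rng.random((10, n_features))  # 10 monitoring intervals of one job
p = classify_sequence(seq, W, U, b, w_out, 0.0)
print(f"P(anomaly) = {p:.3f}")
```

In practice the weights would be trained with binary cross-entropy on labeled job traces; the scheduler could then preempt or reroute jobs whose predicted anomaly probability crosses a threshold.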

