Time-critical database conditions data-handling for the CMS experiment

Author(s):  
Michele De Gruttola ◽  
Salvatore Di Guida ◽  
Vincenzo Innocente ◽  
Antonio Pierro
2011 ◽  
Vol 58 (4) ◽  
pp. 1460-1464 ◽  
Author(s):  
Michele De Gruttola ◽  
Salvatore Di Guida ◽  
Vincenzo Innocente ◽  
Antonio Pierro

2011 ◽  
Vol 331 (4) ◽  
pp. 042007 ◽  
Author(s):  
Francesca Cavallari ◽  
Michele de Gruttola ◽  
Salvatore Di Guida ◽  
Giacomo Govi ◽  
Vincenzo Innocente ◽  
...  

2015 ◽  
Vol 664 (8) ◽  
pp. 082009 ◽  
Author(s):  
J-M Andre ◽  
A Andronidis ◽  
U Behrens ◽  
J Branson ◽  
O Chaze ◽  
...  

2020 ◽  
Vol 245 ◽  
pp. 09007
Author(s):  
Carles Acosta-Silva ◽  
Antonio Delgado Peris ◽  
José Flix Molina ◽  
Jaime Frey ◽  
José M. Hernández ◽  
...  

In view of the increasing computing needs for the HL-LHC era, the LHC experiments are exploring new ways to access, integrate and use non-Grid compute resources. Accessing and making efficient use of Cloud and High Performance Computing (HPC) resources presents a diversity of challenges for the CMS experiment. In particular, network limitations at the compute nodes in HPC centers prevent CMS pilot jobs from connecting to the central HTCondor pool to receive payload jobs for execution. To cope with this limitation, new features have been developed in both HTCondor and the CMS resource acquisition and workload management infrastructure. In this novel approach, a bridge node is set up outside the HPC center and the communications between HTCondor daemons are relayed through a shared file system. This forms the basis of the CMS strategy to exploit the resources of the Barcelona Supercomputing Center (BSC), the main Spanish HPC site. CMS payloads are claimed by HTCondor condor_startd daemons running at the nearby PIC Tier-1 center and routed to BSC compute nodes through the bridge. This fully connects the CMS HTCondor-based central infrastructure to BSC resources via the PIC HTCondor pool. Other challenges include building custom Singularity images with CMS software releases, bringing conditions data to payload jobs, and custom data handling between BSC and PIC. This report describes the initial technical prototype, its deployment and tests, and future steps. A key aspect of the technique described in this contribution is that it could be employed in similar network-restrictive HPC environments elsewhere.
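The shared-file-system relay can be pictured with a small, self-contained sketch. This is not HTCondor code and does not use any actual HTCondor configuration; the directory layout and the functions submit_request, relay_once and wait_for_response are hypothetical, chosen only to illustrate how daemon messages can be exchanged through files when compute nodes have no outbound network connectivity and a bridge node outside the center forwards them.

```python
# Illustrative sketch (not HTCondor code): relaying messages through a shared
# file system between compute nodes without outbound network access and a
# bridge node outside the HPC center. All names here are hypothetical.

import json
import time
import uuid
from pathlib import Path

SHARED_DIR = Path("/tmp/bridge-demo")   # stands in for the HPC shared file system
INBOX = SHARED_DIR / "to_bridge"        # compute node -> bridge
OUTBOX = SHARED_DIR / "from_bridge"     # bridge -> compute node


def submit_request(payload: dict) -> str:
    """Compute-node side: drop a request file on the shared file system."""
    INBOX.mkdir(parents=True, exist_ok=True)
    msg_id = uuid.uuid4().hex
    (INBOX / f"{msg_id}.json").write_text(json.dumps(payload))
    return msg_id


def relay_once() -> None:
    """Bridge side (outside the HPC center): pick up requests, act on behalf
    of the node, and write the answer back to the shared file system."""
    OUTBOX.mkdir(parents=True, exist_ok=True)
    for req_file in sorted(INBOX.glob("*.json")):
        request = json.loads(req_file.read_text())
        # In the real system the bridge would contact the central pool here;
        # in this sketch we simply echo an acknowledgement.
        response = {"request": request, "status": "matched-to-payload"}
        (OUTBOX / req_file.name).write_text(json.dumps(response))
        req_file.unlink()


def wait_for_response(msg_id: str, timeout: float = 5.0) -> dict:
    """Compute-node side: poll the shared file system for the bridge's reply."""
    deadline = time.time() + timeout
    reply = OUTBOX / f"{msg_id}.json"
    while time.time() < deadline:
        if reply.exists():
            return json.loads(reply.read_text())
        time.sleep(0.1)
    raise TimeoutError(f"no response for {msg_id}")


if __name__ == "__main__":
    mid = submit_request({"daemon": "startd", "wants": "payload-job"})
    relay_once()   # would run continuously on the bridge node
    print(wait_for_response(mid))
```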


2020 ◽  
Vol 245 ◽  
pp. 01024
Author(s):  
Chiara Rovelli

The CMS experiment at the LHC features an electromagnetic calorimeter (ECAL) made of lead tungstate scintillating crystals. The ECAL energy response is fundamental for both triggering purposes and offline analysis. Due to the challenging LHC radiation environment, the response of both crystals and photodetectors to particles evolves with time, so continuous monitoring and correction of these ageing effects are crucial. Fast, reliable and efficient workflows are set up to compute a first set of corrections within 48 hours of data-taking, making use of dedicated data streams and processing. These corrections, stored in relational databases, are then accessed during the prompt offline reconstruction of the CMS data. Twice a week, the calibrations used in the trigger are also updated in the database and accessed during data-taking. In this note, the design of the CMS ECAL data handling and processing is reviewed.
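The pattern of storing corrections in a relational database and retrieving the set that applies to a given run can be sketched with a minimal example. This is not the CMS conditions database schema; the table, column names, and JSON payload format are assumptions used only to illustrate interval-of-validity style lookups.

```python
# Illustrative sketch, not the CMS conditions schema: calibration payloads are
# keyed by the first run they are valid for, and a lookup returns the latest
# payload whose validity starts at or before the requested run.

import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE ecal_calibrations (
           since_run INTEGER PRIMARY KEY,   -- first run the payload is valid for
           payload   TEXT NOT NULL          -- calibration constants (JSON here)
       )"""
)

def insert_calibration(since_run: int, constants: dict) -> None:
    """Register a new set of corrections, valid from `since_run` onwards."""
    conn.execute(
        "INSERT OR REPLACE INTO ecal_calibrations VALUES (?, ?)",
        (since_run, json.dumps(constants)),
    )
    conn.commit()

def calibration_for_run(run: int) -> dict:
    """Return the most recent payload whose validity starts at or before `run`."""
    row = conn.execute(
        "SELECT payload FROM ecal_calibrations "
        "WHERE since_run <= ? ORDER BY since_run DESC LIMIT 1",
        (run,),
    ).fetchone()
    if row is None:
        raise LookupError(f"no calibration covers run {run}")
    return json.loads(row[0])

# Corrections from the 48-hour workflow are appended with new validity ranges;
# prompt reconstruction then picks whichever payload covers the run it processes.
insert_calibration(100000, {"crystal_42": 1.000})
insert_calibration(100500, {"crystal_42": 0.987})   # after transparency loss
print(calibration_for_run(100750))                  # uses the 100500 payload
```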


1980 ◽  
Vol 19 (01) ◽  
pp. 37-41
Author(s):  
R. F. Woolson ◽  
M. T. Tsuang ◽  
L. R. Urban

We are now conducting a forty-year follow-up and family study of 200 schizophrenics, 325 manic-depressives and 160 surgical controls. The study began in 1973 and has continued to the present. Numerous data-handling and data-management decisions were made in the course of collecting the data for the project. In this report, some of the practical difficulties in the data handling and computer management of such large and bulky data sets are enumerated.


2019 ◽  
Author(s):  
Michaela Bonfert ◽  
Claire Andonian ◽  
Christoph Bidlingmaier ◽  
Claudia Berlin ◽  
Ingo Borggraefe ◽  
...  

TAPPI Journal ◽  
2019 ◽  
Vol 18 (11) ◽  
pp. 679-689
Author(s):  
CYDNEY RECHTIN ◽  
CHITTA RANJAN ◽  
ANTHONY LEWIS ◽  
BETH ANN ZARKO

Packaging manufacturers are challenged to achieve consistent strength targets and maximize production while reducing costs through smarter fiber utilization, chemical optimization, energy reduction, and more. With innovative instrumentation readily accessible, mills are collecting vast amounts of data that provide ever-increasing visibility into their processes. Turning this visibility into actionable insight is key to exceeding customer expectations and reducing costs. Predictive analytics supported by machine learning can provide real-time quality measures that remain robust and accurate in the face of changing machine conditions. These adaptive quality “soft sensors” allow for more informed, on-the-fly process changes; fast change detection; and process control optimization without requiring periodic model tuning. The use of predictive modeling in the paper industry has increased in recent years; however, little attention has been given to the finished quality of packaging. The use of machine learning to maintain prediction relevancy under ever-changing machine conditions is novel. In this paper, we demonstrate the process of establishing real-time, adaptive quality predictions in an industry focused on reel-to-reel quality control, and we discuss the value created through the availability and use of real-time critical quality measures.
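The adaptive soft-sensor idea can be illustrated with a generic incremental-learning sketch: a regression model that keeps updating as new labelled strength measurements arrive, so predictions track drifting machine conditions without periodic offline retuning. This is not the model described in the paper; the features, drift pattern, and learning settings are assumptions chosen only for illustration.

```python
# Generic illustration of an adaptive quality "soft sensor": the model is
# updated incrementally with each new batch of lab-tested data, so it follows
# slowly changing machine conditions. Not the authors' model.

import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scaler = StandardScaler()
model = SGDRegressor(learning_rate="constant", eta0=0.01, random_state=0)

def new_batch(hour: int, n: int = 32):
    """Simulated process data (e.g. basis weight, refining energy, press load).
    The true strength relationship drifts slowly with time (`hour`)."""
    X = rng.normal(size=(n, 3))
    drift = 0.02 * hour                      # slow change in machine conditions
    y = (2.0 + drift) * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, n)
    return X, y

for hour in range(48):
    X, y = new_batch(hour)
    Xs = scaler.partial_fit(X).transform(X)  # keep feature scaling current too
    if hour > 0:
        # Predict first (what the mill would act on), then learn from the lab test.
        err = np.mean(np.abs(model.predict(Xs) - y))
        if hour % 12 == 0:
            print(f"hour {hour:2d}: mean abs error {err:.3f}")
    model.partial_fit(Xs, y)                 # incremental update keeps it relevant
```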


2008 ◽  
Author(s):  
Christopher J. Westren ◽  
Lester Ian Clark ◽  
Azam Zreik ◽  
Ben Ersan ◽  
Chad Jurica
