A Vertex-Aligned Model for Packing 4-Hexagonal Clusters in a Regular Hexagonal Container

Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 700
Author(s):  
Marina Prvan ◽  
Arijana Burazin Mišura ◽  
Zoltan Gecse ◽  
Julije Ožegović

This paper deals with the problem of packing polyhex clusters in a regular hexagonal container. The problem arises in many applications with various cluster shapes, but the symmetric polyhex is the most useful in engineering because of its geometrical properties. Hence, we concentrate on mathematical modeling for one such application, where the “bee” tetrahex is chosen for the new Compact Muon Solenoid (CMS) design upgrade; CMS is one of the four detectors used in the Large Hadron Collider (LHC) experiment at the European Laboratory for Particle Physics (CERN). We start from the existing hexagonal containers with hexagonal cells packed inside and uniform clustering applied. We compare the center-aligned (CA) and vertex-aligned (VA) models, analyzing the cluster rotations that increase packing efficiency. We formally describe the geometrical properties of the clustering approaches and show that, with uniform clustering, cluster sharing at the container border is inevitable. In addition, we propose a new vertex-aligned model that decreases the number of shared clusters in the uniform scenario, although it packs fewer clusters inside the container. We also describe a non-uniform tetrahex cluster packing scheme for the proposed container model. With the proposed packing solution, all clusters are contained entirely inside the container region. Since cluster sharing at the container border is completely avoided, maximal packing efficiency is obtained compared to the existing models.
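
Since 3n(n-1) is even for every n, a regular hexagonal container of side n holds an odd number 3n(n-1)+1 of unit cells, so no uniform grouping into four-cell tetrahexes can partition it exactly; this elementary observation is consistent with the border sharing described above, though it is not the paper's argument. The sketch below (ours, not the paper's CA or VA construction) enumerates the container cells in axial coordinates and bounds the number of whole tetrahex clusters.

```python
# Minimal sketch: cells of a regular hexagonal container of side n in axial
# coordinates, and an upper bound on whole 4-cell (tetrahex) clusters.
# This is an illustration, not the paper's CA or VA model.

def hex_container_cells(n):
    """Return axial coordinates (q, r) of all unit cells in a hexagon of side n."""
    cells = []
    for q in range(-(n - 1), n):
        for r in range(-(n - 1), n):
            if abs(q + r) <= n - 1:   # third cube coordinate is s = -q - r
                cells.append((q, r))
    return cells

for n in (4, 8, 12):
    m = len(hex_container_cells(n))   # equals 3*n*(n-1) + 1, always odd
    print(f"side {n}: {m} cells, at most {m // 4} whole tetrahex clusters,"
          f" {m % 4} cell(s) left over")
```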

2006 ◽  
Vol 37 (1) ◽  
pp. 67-80 ◽  
Author(s):  
Pierre Bonnal ◽  
Jurgen De Jonghe ◽  
John Ferguson

The Large Hadron Collider (LHC) is under construction at CERN, the European Laboratory for Particle Physics, near Geneva, Switzerland. In 2003, a new earned value management (EVM) system was introduced to improve transparency in LHC project reporting, to allow a clearer distinction between cost variances against the baseline caused by overruns and those caused by delays, and to provide the project management team with a more responsive project management information system for better decision-making. EVM has become a de facto standard for the follow-up of cost and schedule, and several commercial packages are offered for implementing an EVM system. But because none of these packages fulfilled CERN's requirements, its executive management decided to proceed with an in-house development. This paper provides an overview of what CERN considers to be good requirements for an EVM system suited to large-scale projects: the deliverable-oriented, collaborative, and lean-management dimensions are enforced. In conclusion, we discuss some of our positive and negative experiences so that those who would like to develop or implement similar enterprise-wide project control systems can be more aware of common pitfalls.
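
Earned value management rests on three standard per-period quantities: planned value (PV), earned value (EV), and actual cost (AC), from which the usual variances and indices follow (CV = EV - AC, SV = EV - PV, CPI = EV/AC, SPI = EV/PV); it is exactly this separation that lets cost overruns be distinguished from delays. The sketch below computes these textbook metrics for an invented work package; the figures and field names are illustrative and are not taken from CERN's in-house system.

```python
# Textbook EVM metrics (standard definitions, not CERN's in-house system).
from dataclasses import dataclass

@dataclass
class WorkPackage:
    planned_value: float   # PV: budgeted cost of work scheduled
    earned_value: float    # EV: budgeted cost of work performed
    actual_cost: float     # AC: actual cost of work performed

    def cost_variance(self):      # CV > 0 means under budget
        return self.earned_value - self.actual_cost

    def schedule_variance(self):  # SV > 0 means ahead of schedule
        return self.earned_value - self.planned_value

    def cpi(self):                # cost performance index
        return self.earned_value / self.actual_cost

    def spi(self):                # schedule performance index
        return self.earned_value / self.planned_value

# Hypothetical reporting-period figures (kCHF), for illustration only.
wp = WorkPackage(planned_value=1200.0, earned_value=1050.0, actual_cost=1300.0)
print(f"CV = {wp.cost_variance():+.0f}  SV = {wp.schedule_variance():+.0f}")
print(f"CPI = {wp.cpi():.2f}  SPI = {wp.spi():.2f}")
```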


2009 ◽  
Vol 19 ◽  
pp. 28-35
Author(s):  
Edgar Casimiro ◽  
Marco A. Reyes ◽  
Gerardo Moreno ◽  
David Delepine

The Compact Muon Solenoid experiment at the CERN Large Hadron Collider will study proton-proton collisions at unprecedented energies and luminosities. In this article we provide first a brief general introduction to particle physics. We then explain what CERN is. Then we describe the Large Hadron Collider at CERN, the most powerful particle accelerator ever built. Finally we describe the Compact Muon Solenoid experiment, its physics goals, construction details, and current status.


1999 ◽  
Vol 7 (1) ◽  
pp. 77-92 ◽  
Author(s):  
C. H. Llewellyn Smith

There is a long and beneficial tradition of international collaboration in science and technology. There are, however, trends working against collaboration, and tensions between (for example) collaboration and competition, and between European integration and increasing emphasis on national competitiveness. It is therefore important to have a clear understanding of when and in what form international collaboration is desirable. This paper considers these issues, drawing lessons from CERN – the European Laboratory for Particle Physics. CERN, which pioneered European collaboration, is now becoming in a sense a world organization. Physicists from 47 countries will participate in experiments at CERN's next project, the Large Hadron Collider (LHC), which is set to be the first megascience project constructed by a global partnership, driven ‘bottom up’ by the scientists involved. CERN's experience with the LHC could provide an excellent precedent for other projects.


Author(s):  
Jos Engelen

In this paper, I present a view of organizational and financial matters relevant to the successful construction and operation of the experimental set-ups at the Large Hadron Collider of CERN, the European Laboratory for Particle Physics in Geneva. Construction of these experiments was particularly challenging: new detector technologies had to be developed; experimental set-ups that are larger and more complex than ever before had to be constructed; and larger collaborations than ever before had to be organized. Fundamental to the success were: the ‘reference’ provided by CERN, peer review, signed memoranda of understanding, well-organized resources review boards as an interface to the national funding agencies, and collegial, but solidly organized, experimental collaborations.


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Christian Ariza-Porras ◽  
Valentin Kuznetsov ◽  
Federica Legger

The globally distributed computing infrastructure required to cope with the multi-petabyte datasets produced by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN comprises several subsystems, such as workload management, data management, data transfers, and submission of users’ and centrally managed production requests. To guarantee the efficient operation of the whole infrastructure, CMS monitors all subsystems according to their performance and status. Moreover, we track key metrics to evaluate and study the system performance over time. The CMS monitoring architecture allows both real-time and historical monitoring of a variety of data sources. It relies on scalable and open source solutions tailored to satisfy the experiment’s monitoring needs. We present the monitoring data flow and software architecture for the CMS distributed computing applications. We discuss the challenges, components, current achievements, and future developments of the CMS monitoring infrastructure.
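
The abstract describes a data flow in which each subsystem emits metric documents that feed both real-time dashboards and long-term archives. As a rough illustration of that producer pattern (the endpoint URL, document schema, and transport below are hypothetical assumptions, not the actual CMS monitoring interfaces), a producer might tag each measurement with its source and timestamp and push it to a collector over HTTP:

```python
# Hypothetical metric producer for a monitoring data flow; the endpoint URL,
# document fields, and transport are illustrative assumptions, not the
# actual CMS monitoring interfaces.
import json
import time
import urllib.request

COLLECTOR_URL = "http://monit-collector.example.org/api/v1/metrics"  # placeholder

def emit_metric(producer: str, name: str, value: float, tags: dict) -> None:
    doc = {
        "producer": producer,           # which subsystem sent the document
        "metric": name,
        "value": value,
        "tags": tags,                   # e.g. site, workflow, job type
        "timestamp": int(time.time()),  # epoch seconds, for time-series storage
    }
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # a real producer would check status and retry on failure

emit_metric("workload-mgmt", "jobs_running", 12345.0,
            {"site": "T1_EXAMPLE", "type": "production"})
```

The same documents can then be fanned out to both a live dashboard and a historical store, which is the dual real-time/historical path the abstract describes.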


2021 ◽  
Vol 251 ◽  
pp. 02054
Author(s):  
Olga Sunneborn Gudnadottir ◽  
Daniel Gedon ◽  
Colin Desmarais ◽  
Karl Bengtsson Bernander ◽  
Raazesh Sainudiin ◽  
...  

In recent years, machine-learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are utilised in everything from trigger systems to reconstruction and data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data, which can be easily modified to provide solutions for a variety of different decision problems. In the current paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, thereby extending its utility to learn from arbitrarily large data sets. UCluster combines a graph-based neural network called ABCnet with a clustering step, using a combined loss function in the training phase. The original code is publicly available in TensorFlow v1.14 and has previously been trained on a single GPU. It shows a clustering accuracy of 81% when applied to the problem of multi-class classification of simulated jet events. Our implementation adds the distributed training functionality by utilising the Horovod distributed training framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of Parquet files for splitting data between compute nodes, the distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC data sets. We find that the model is well suited for distributed training, with the training time decreasing in direct relation to the number of GPUs used. However, further improvement through a more exhaustive, and possibly distributed, hyper-parameter search is required in order to achieve the reported accuracy of the original UCluster method.
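
The Horovod pattern the abstract refers to is largely model-agnostic: each worker process pins one GPU, wraps its gradient tape so gradients are averaged across workers, and broadcasts the initial state from rank 0 after the first step. The sketch below shows that canonical TensorFlow v2 wiring with a placeholder model and random data standing in for ABCnet and the jet data set, which are not reproduced here.

```python
# Canonical Horovod + TensorFlow v2 training wiring (the model and data are
# placeholders, not the actual ABCnet network or jet data set).
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()                                     # one process per GPU
gpus = tf.config.list_physical_devices("GPU")
if gpus:                                       # pin this worker to its GPU
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])   # stand-in model
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Scale the learning rate with the number of workers, as Horovod recommends.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())

@tf.function
def train_step(x, y, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    tape = hvd.DistributedGradientTape(tape)   # allreduce-averaged gradients
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:                            # sync initial state from rank 0
        hvd.broadcast_variables(model.variables, root_rank=0)
        hvd.broadcast_variables(opt.variables(), root_rank=0)
    return loss

# Each worker would read its own shard, e.g. its own subset of Parquet files.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([256, 16]), tf.zeros([256], tf.int64))).batch(32)
for step, (x, y) in enumerate(dataset):
    loss = train_step(x, y, step == 0)
```

Launched with one process per GPU (e.g. via horovodrun), this is the setup in which training time scales down with worker count, as reported above.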


2021 ◽  
Vol 9 ◽  
Author(s):  
N. Demaria

The High Luminosity Large Hadron Collider (HL-LHC) at CERN will constitute a new frontier for particle physics after the year 2027. Experiments will undertake a major upgrade in order to meet this challenge: the use of innovative sensors and electronics will play a main role in this. This paper describes recent developments in 65 nm CMOS technology for readout ASIC chips in future High Energy Physics (HEP) experiments. These allow unprecedented performance in terms of speed, noise, power consumption, and granularity of the tracking detectors.


Author(s):  
Alexandros Ioannidis-Pantopikos ◽  
Donat Agosti

In the landscape of general-purpose repositories, Zenodo was built at the data center of the European Laboratory for Particle Physics (CERN) to facilitate the sharing and preservation of the long tail of research across all disciplines and scientific domains. Despite Zenodo’s long tradition of making research artifacts FAIR (Findable, Accessible, Interoperable, and Reusable), challenges remain in applying these principles effectively when serving the needs of specific research domains. Plazi’s biodiversity taxonomic literature processing pipeline liberates data from publications, making it FAIR via extensive metadata, the minting of a DataCite Digital Object Identifier (DOI), a licence, and both human- and machine-readable output provided by Zenodo, and making it accessible via the Biodiversity Literature Repository community at Zenodo. The deposits (e.g., taxonomic treatments, figures) are an example of how local networks of information can be formally linked to explicit resources in a broader context of other platforms like GBIF (Global Biodiversity Information Facility). In the context of biodiversity taxonomic literature data workflows, a general-purpose repository’s traditional submission approach is not enough to preserve rich metadata and to capture highly interlinked objects, such as taxonomic treatments and digital specimens. As a prerequisite to serve these use cases and ensure that the artifacts remain FAIR, Zenodo introduced the concept of custom metadata, which allows enhancing submissions such as figures or taxonomic treatments (see as an example the treatment of Eurygyrus peloponnesius) with custom keywords, based on terms from common biodiversity vocabularies like Darwin Core and Audubon Core and with an explicit link to the respective vocabulary term. The aforementioned pipelines and features are designed to be served first and foremost using public Representational State Transfer Application Programming Interfaces (REST APIs) and open web technologies like webhooks. This approach allows researchers and platforms to integrate existing and new automated workflows into Zenodo and thus empowers research communities to create self-sustained cross-platform ecosystems. The BiCIKL project (Biodiversity Community Integrated Knowledge Library) exemplifies how repositories and tools can become building blocks for broader adoption of the FAIR principles. Starting with the above literature processing pipeline, we explain the underlying concepts and the resulting FAIR data, with a focus on the custom metadata used to enhance the deposits.
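
Zenodo's deposit workflow is driven through its public REST API: a client creates a deposition, attaches files, sets metadata, and publishes, receiving a DataCite DOI in return. The sketch below shows that flow with the requests library; the access token and file are placeholders, and the exact shape of the custom Darwin Core keyword field is an assumption based on the abstract, so it should be checked against Zenodo's current API documentation.

```python
# Sketch of a Zenodo deposit via its public REST API (the token and file are
# placeholders; the "custom" Darwin Core field layout is an assumption and
# should be verified against Zenodo's API documentation).
import requests

BASE = "https://zenodo.org/api"
TOKEN = "YOUR-ACCESS-TOKEN"   # placeholder personal access token

# 1. Create an empty deposition.
r = requests.post(f"{BASE}/deposit/depositions",
                  params={"access_token": TOKEN}, json={})
r.raise_for_status()
dep = r.json()

# 2. Upload a file (here, a figure extracted from a taxonomic publication).
with open("figure1.png", "rb") as fh:
    requests.put(f"{dep['links']['bucket']}/figure1.png",
                 data=fh, params={"access_token": TOKEN}).raise_for_status()

# 3. Set metadata, including vocabulary-linked keywords (layout assumed).
metadata = {
    "metadata": {
        "title": "Figure from a taxonomic treatment (example)",
        "upload_type": "image",
        "image_type": "figure",
        "description": "Illustrative deposit with Darwin Core keywords.",
        "creators": [{"name": "Doe, Jane"}],
        # Assumed custom-field layout for Darwin Core terms:
        "custom": {"dwc:genus": ["Eurygyrus"]},
    }
}
requests.put(f"{BASE}/deposit/depositions/{dep['id']}",
             params={"access_token": TOKEN}, json=metadata).raise_for_status()

# 4. Publishing mints the DataCite DOI.
requests.post(f"{BASE}/deposit/depositions/{dep['id']}/actions/publish",
              params={"access_token": TOKEN}).raise_for_status()
```

The same API, together with webhooks, is what lets external platforms plug automated pipelines such as Plazi's into Zenodo rather than going through the interactive submission form.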


2018 ◽  
Vol 68 (1) ◽  
pp. 429-459 ◽  
Author(s):  
Antonio Boveia ◽  
Caterina Doglioni

Colliders, among the most successful tools of particle physics, have revealed much about matter. This review describes how colliders contribute to the search for particle dark matter, focusing on the highest-energy collider currently in operation, the Large Hadron Collider (LHC) at CERN. In the absence of hints about the character of interactions between dark matter and standard matter, this review emphasizes what could be observed in the near future, presents the main experimental challenges, and discusses how collider searches fit into the broader field of dark matter searches. Finally, it highlights a few areas to watch for the future LHC program.

