Low Temperature Characterization and Modeling of FDSOI Transistors for Cryo CMOS Applications

2021 ◽  
Author(s):  
Mikaël Cassé ◽  
Gérard Ghibaudo

The wide range of cryogenic applications, such as space, high-performance computing, or high-energy physics, has boosted the investigation of CMOS technology performance down to cryogenic temperatures. In particular, the readout electronics of quantum computers operating at low temperature require larger bandwidth than space applications, so advanced CMOS nodes have to be considered. FDSOI technology appears to be a valuable solution for the co-integration of qubits with their control and read-out electronics. However, there is still a lack of reports in the literature on the behavior of advanced CMOS nodes under deep-cryogenic operation, from device electrostatics to mismatch and self-heating, all of which is required for the development of robust design tools. For these reasons, this chapter presents a review of electrical characterization and modeling results recently obtained on ultra-thin-film FDSOI MOSFETs down to 4.2 K.
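For orientation, the ideal thermionic limit of the MOSFET subthreshold swing, a textbook relation rather than a result from this chapter, shows why device characteristics change so drastically at deep-cryogenic temperatures:

    % Ideal thermionic subthreshold swing of a MOSFET (ideality factor n):
    SS = n \,\frac{k_B T}{q}\,\ln 10
    % For n = 1: SS ~ 59.6 mV/dec at T = 300 K, but only ~ 0.83 mV/dec at T = 4.2 K

Measured swings typically saturate well above this limit at the lowest temperatures, which is one of the modeling challenges such characterization work has to capture.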

2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Florin Pop

Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High-performance computing makes it possible to run such simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data science. This paper presents existing computational methods for high-energy physics (HEP), analyzed from two perspectives: numerical methods and high-performance computing. The methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
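As a minimal illustration of the Monte Carlo techniques surveyed (a generic sketch, not code from the paper), the following Python snippet estimates an integral by random sampling and exposes the characteristic statistical error of the method:

    import math
    import random

    def mc_integrate(f, a, b, n, seed=1):
        """Plain Monte Carlo estimate of the integral of f over [a, b]."""
        rng = random.Random(seed)
        total = total_sq = 0.0
        for _ in range(n):
            y = f(rng.uniform(a, b))
            total += y
            total_sq += y * y
        mean = total / n
        # The statistical error falls off as 1/sqrt(n) -- the hallmark of MC.
        err = (b - a) * math.sqrt(max(total_sq / n - mean * mean, 0.0) / n)
        return (b - a) * mean, err

    for n in (1_000, 100_000):
        est, err = mc_integrate(math.sin, 0.0, math.pi, n)
        print(f"n={n:>6}: integral = {est:.4f} +/- {err:.4f}  (exact: 2)")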


2019 ◽  
Vol 214 ◽  
pp. 08004 ◽  
Author(s):  
R. Du ◽  
J. Shi ◽  
J. Zou ◽  
X. Jiang ◽  
Z. Sun ◽  
...  

Two production clusters co-exist at the Institute of High Energy Physics (IHEP). One is a High Throughput Computing (HTC) cluster with HTCondor as the workload manager; the other is a High Performance Computing (HPC) cluster with Slurm as the workload manager. The resources of the HTCondor cluster are funded by multiple experiments, and resource utilization reached more than 90% by adopting a dynamic resource-sharing mechanism. Nevertheless, a bottleneck arises when more resources are requested by multiple experiments at the same moment. On the other hand, parallel jobs running on the Slurm cluster exhibit specific attributes, such as a high degree of parallelism, low job counts, and long wall times. Such attributes tend to leave free resource slots that are suitable for jobs from the HTCondor cluster. As a result, a mechanism that transparently schedules jobs from the HTCondor cluster onto the Slurm cluster would improve the resource utilization of the Slurm cluster and reduce job queue time for the HTCondor cluster. In this paper, we present three methods to migrate HTCondor jobs to the Slurm cluster and conclude that HTCondor-C is the preferred one. Furthermore, because the design philosophies and application scenarios of HTCondor and Slurm differ, some issues and possible solutions related to job scheduling are presented.
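As an illustration of the preferred mechanism, here is a minimal sketch of an HTCondor-C submission through HTCondor's Python bindings; the host names are placeholders, and the snippet shows the grid-universe forwarding idea rather than IHEP's production setup:

    import htcondor  # HTCondor Python bindings

    # HTCondor-C: a "grid"-universe job is forwarded by the local schedd to a
    # remote schedd (placeholder host names), which runs it in the other pool.
    job = htcondor.Submit({
        "universe": "grid",
        "grid_resource": "condor remote-schedd.example.org remote-cm.example.org",
        "executable": "analysis.sh",
        "output": "job.out",
        "error": "job.err",
        "log": "job.log",
    })

    schedd = htcondor.Schedd()            # local submission point
    result = schedd.submit(job, count=1)  # Schedd.submit API of htcondor >= 9
    print("submitted cluster", result.cluster())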


2020 ◽  
Vol 245 ◽  
pp. 05012
Author(s):  
Venkitesh Ayyar ◽  
Wahid Bhimji ◽  
Maria Elena Monzani ◽  
Andrew Naylor ◽  
Simon Patton ◽  
...  

High Energy Physics experiments like the LUX-ZEPLIN (LZ) dark matter experiment face unique challenges when running their computation on High Performance Computing resources. In this paper, we describe some strategies to optimize the memory usage of simulation codes with the help of profiling tools. We employed this approach and achieved memory reductions of 10–30%. While this work was performed in the context of the LZ experiment, it has wider applicability to other HEP experimental codes that face these challenges on modern computer architectures.
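The profiling-guided workflow described here can be illustrated generically; the sketch below uses Python's standard tracemalloc module to locate allocation hot spots (an illustration of the approach, not the LZ tooling):

    import tracemalloc

    def build_events(n):
        # Stand-in for a simulation step that allocates one record per event.
        return [{"id": i, "hits": list(range(100))} for i in range(n)]

    tracemalloc.start()
    events = build_events(10_000)
    snapshot = tracemalloc.take_snapshot()

    # The top allocation sites are the candidates for memory optimizations
    # (streaming instead of buffering, compact types, object reuse, ...).
    for stat in snapshot.statistics("lineno")[:3]:
        print(stat)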


2021 ◽  
Vol 251 ◽  
pp. 03033
Author(s):  
Micah Groh ◽  
Norman Buchanan ◽  
Derek Doyle ◽  
James B. Kowalkowski ◽  
Marc Paterno ◽  
...  

Modern experiments in high energy physics analyze millions of events recorded in particle detectors to select the events of interest and make measurements of physics parameters. These data are often stored as tabular data in files containing detector information and reconstructed quantities. Most current techniques for event selection in these files lack the scalability needed for high-performance computing environments. We describe our work to develop a high energy physics analysis framework suitable for high performance computing. This new framework utilizes modern tools for reading files and implicit data parallelism. Framework users analyze tabular data using standard, easy-to-use data analysis techniques in Python, while the framework handles the file manipulations and parallelism without the user needing advanced experience in parallel programming. In future versions, we hope to provide a framework that can be used on a personal computer or a high-performance computing cluster with little change to the user code.
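The intended user experience, plain Python cuts on tabular data while the framework parallelizes file handling behind the scenes, can be sketched with standard tools (a generic illustration of cut-based event selection, not the framework's actual API):

    import pandas as pd

    # Toy event table: one row per event, columns are reconstructed quantities.
    events = pd.DataFrame({
        "run":     [1, 1, 2, 2, 3],
        "energy":  [12.5, 3.1, 48.9, 7.7, 22.4],  # GeV
        "ntracks": [4, 1, 9, 2, 5],
    })

    # Event selection expressed as ordinary boolean cuts; a framework can
    # evaluate the same expression across many files in parallel.
    selected = events[(events["energy"] > 10.0) & (events["ntracks"] >= 3)]
    print(selected)
    print(f"selection efficiency: {len(selected) / len(events):.0%}")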


Author(s):  
Jeremy Cohen ◽  
Ioannis Filippis ◽  
Mark Woodbridge ◽  
Daniela Bauer ◽  
Neil Chue Hong ◽  
...  

Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.


Instruments ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 8 ◽  
Author(s):  
Lucio Rossi ◽  
Carmine Senatore

In view of the preparation for a post-LHC collider, in 2010 the high-energy physics (HEP) community started to discuss various options, including the use of HTS for very high-field dipoles. A small program was therefore begun in Europe aimed at exploring the possibility of using HTS for accelerator-quality magnets. Supported by various EU-funded programs, though at modest levels, it has enabled the European accelerator-magnet research community to start gaining experience with HTS and to address a few key issues. The program is based on the use of REBa2Cu3O7−x (REBCO) tapes formed into 10 kA Roebel cables, which are wound into small dipoles of 30–40 mm aperture in the 5 T range. The dipoles are designed to be later inserted into a background dipole field (in Nb3Sn), to eventually reach a field level in the 16–20 T range, beyond the reach of Low Temperature Superconductors (LTS). The program is currently underway: more than 1 km of high-performance tape (Je > 500 A/mm2 at 20 T, 4.2 K) has been manufactured and characterized, various 30 m long Roebel cables have been assembled and validated up to 13 kA, a few dipoles have been wound and tested, reaching 4.5 T in stand-alone operation (while a dipole made from flat racetrack coils exceeded 5 T using stacked-tape cable), and tests in background field are being organized.
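A back-of-envelope check shows the quoted numbers are mutually consistent. For an ideal 60° sector dipole of radial coil width w and uniform current density J, the standard accelerator-magnet estimate is (assumed values: w = 15 mm, and the tape Je taken as an optimistic stand-in for the coil's overall current density):

    % Central field of an ideal 60-degree sector dipole:
    B_1 = \frac{2\mu_0}{\pi}\, J\, w \,\sin 60^{\circ}
    % With J = 500 A/mm^2 = 5\times10^{8} A/m^2 and w = 0.015 m:
    B_1 \approx 6.9\times10^{-7} \cdot 5\times10^{8} \cdot 0.015 \approx 5.2~\mathrm{T}

This lands in the 5 T range targeted for the stand-alone dipoles; a real coil reaches somewhat less because cable voids, insulation, and stabilizer dilute the effective current density.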


Proceedings ◽  
2020 ◽  
Vol 65 (1) ◽  
pp. 25
Author(s):  
Antonio Garrido Marijuan ◽  
Roberto Garay ◽  
Mikel Lumbreras ◽  
Víctor Sánchez ◽  
Olga Macias ◽  
...  

District heating networks deliver around 13% of the heating energy in the EU and are considered a key element of the progressive decarbonization of Europe. The H2020 REnewable Low TEmperature District project (RELaTED) seeks to contribute to the energy decarbonization of these infrastructures through the development and demonstration of the following concepts: reduction in network temperature down to 50 °C, integration of renewable energies and waste heat sources with a novel substation concept, and improvements in building-integrated solar thermal systems. The coupling of renewable thermal sources with ultra-low temperature district heating (DH) allows for a bidirectional energy flow, using the DH network both as thermal storage in periods of production surplus and as a back-up heating source during consumption peaks. The ultra-low temperature enables the integration of a wide range of energy sources, such as waste heat from industry. Furthermore, RELaTED also develops concepts for district heating-connected reversible heat pump systems that make it possible to reach adequate temperature levels for domestic hot water, as well as to use the network for high-performance district cooling. These developments will be demonstrated in four locations: Estonia, Serbia, Denmark, and Spain.
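Why a 50 °C network pairs well with booster heat pumps can be seen from the ideal coefficient of performance; the following small calculation is illustrative and not taken from the project:

    % Ideal (Carnot) COP of a heat pump delivering heat at T_hot from a source at T_cold:
    \mathrm{COP}_{\mathrm{Carnot}} = \frac{T_{\mathrm{hot}}}{T_{\mathrm{hot}} - T_{\mathrm{cold}}}
    % Boosting domestic hot water to 60 C (333 K) from a 50 C (323 K) network:
    \mathrm{COP} = \frac{333}{333 - 323} \approx 33
    % versus lifting from a 10 C (283 K) ambient source:
    \mathrm{COP} = \frac{333}{333 - 283} \approx 6.7

Real machines achieve only a fraction of the Carnot value, but the ordering holds: the warmer the source, the less electricity is needed per unit of delivered heat.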


2019 ◽  
Vol 214 ◽  
pp. 08009 ◽  
Author(s):  
Matthias J. Schnepf ◽  
R. Florian von Cube ◽  
Max Fischer ◽  
Manuel Giffels ◽  
Christoph Heidecker ◽  
...  

Demand for computing resources in high energy physics (HEP) shows highly dynamic behavior, while the resources provided by the Worldwide LHC Computing Grid (WLCG) remain static. It has become evident that opportunistic resources such as High Performance Computing (HPC) centers and commercial clouds are well suited to cover peak loads. However, the utilization of these resources gives rise to new levels of complexity; for example, resources need to be managed highly dynamically, and HEP applications require a very specific software environment that is usually not provided at opportunistic resources. Further aspects to consider are limitations in network bandwidth, which can cause I/O-intensive workflows to run inefficiently. The key component for dynamically running HEP applications on opportunistic resources is the use of modern container and virtualization technologies. Based on these technologies, the Karlsruhe Institute of Technology (KIT) has developed ROCED, a resource manager that dynamically integrates and manages a variety of opportunistic resources. In combination with ROCED, the HTCondor batch system acts as a powerful single entry point to all available computing resources, leading to a seamless and transparent integration of opportunistic resources into HEP computing. KIT is currently improving resource management and job scheduling by focusing on the I/O requirements of individual workflows, the available network bandwidth, and scalability. For these reasons, we are developing a new resource manager called TARDIS. In this paper, we give an overview of the utilized technologies, the dynamic management and integration of resources, and the status of the I/O-based resource and job scheduling.
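The demand-driven provisioning idea behind such resource managers can be reduced to a small control loop; the snippet below is a deliberately simplified sketch, with the batch-system and provider interfaces as well as the threshold invented for illustration (it is not ROCED or TARDIS code):

    import time

    QUEUED_PER_SLOT = 2  # invented scale-up threshold for this sketch

    def scaling_loop(batch, provider, interval=60):
        """Grow or shrink opportunistic resources to follow the job backlog."""
        while True:
            queued = batch.count_queued_jobs()  # backlog, e.g. from HTCondor
            slots = provider.active_slots()     # currently booked cloud/HPC workers
            if queued > QUEUED_PER_SLOT * max(slots, 1):
                provider.book(n=1)              # boot another containerized worker
            elif queued == 0 and slots > 0:
                provider.release(n=1)           # drain and return an idle worker
            time.sleep(interval)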

