3D Stacks of Microprocessors and Memories With Backside Two-Phase Multi-Microchannel Cooler

Author(s):  
Jackson B. Marcinichen ◽  
John R. Thome

For the next generation of high-performance computers, the new challenges are to shorten the distance over which data are transported (to accelerate the transfer of information) between multi-microprocessors and memories, and to cool these electronic components despite the increased heat flux that results from increased transistor density. Recent technological advances show a trend toward 3D integrated-circuit stacked architectures with interlayer cooling (multi-microchannels in the silicon layers). However, such a design presents huge challenges: flow distribution to hundreds of microchannels spread over the different interlayers, thermo-hydrodynamic and geometrical limitations, manufacturing, etc. Since 3D-ICs with interlayer cooling are still about a decade away, a viable shorter-term goal is 3D stacks with backside cooling, taking advantage of silicon layers that can now be thinned down to only 50 μm. The present work therefore presents thermo-hydrodynamic simulations of 3D stacks with only a backside cooler, which considerably simplifies assembly and guarantees a high level of reliability. In summary, the results show that this concept is thermally feasible and that interlayer microchannels (between stacks) will potentially not be necessary.
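A rough feasibility check for backside-only cooling can be made with a one-dimensional thermal resistance stack: every die's heat must cross the thinned silicon and bonding layers between it and the cold plate. The following minimal sketch illustrates that reasoning; apart from the 50 μm die thickness cited above, all material properties, layer thicknesses, heat transfer coefficients and heat loads are illustrative assumptions, not values from the study.

```python
# Minimal 1D steady-state estimate of junction temperatures in a 3D stack
# cooled only from the backside. All values except the 50 um die thickness
# are illustrative assumptions, not parameters from the study.

K_SI = 130.0      # silicon thermal conductivity, W/(m K)
K_BOND = 1.0      # bonding layer conductivity, W/(m K) (assumed)
T_SI = 50e-6      # thinned die thickness, m (50 um, as cited above)
T_BOND = 10e-6    # bonding layer thickness, m (assumed)
H_EVAP = 60e3     # two-phase evaporator heat transfer coeff., W/(m^2 K) (assumed)
T_SAT = 60.0      # refrigerant saturation temperature, deg C (assumed)

def junction_temps(q):
    """Per-die temperatures (deg C); q[i] is die i's heat flux in W/m^2,
    with die 0 next to the backside cooler. Heat generated in die i must
    cross every tier (silicon + bond) between die i and the cooler."""
    r_tier = T_SI / K_SI + T_BOND / K_BOND   # conduction resistance, K m^2/W
    t = T_SAT + sum(q) / H_EVAP              # evaporator wall temperature
    temps = []
    for i in range(len(q)):
        t += r_tier * sum(q[i:])   # flux crossing the tier below die i
        temps.append(t)
    return temps

# Example: two dies at 50 and 30 W/cm^2 (assumed loads)
for i, t in enumerate(junction_temps([50e4, 30e4])):
    print(f"die {i}: {t:.1f} deg C")
```

With these assumed numbers the conduction penalty of each thinned tier is only a few kelvin, which is the intuition behind the feasibility claim: the dominant temperature rise occurs at the evaporator wall, not within the stack.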

2014 ◽  
Vol 136 (2) ◽  
Author(s):  
Yassir Madhour ◽  
Brian P. d'Entremont ◽  
Jackson Braz Marcinichen ◽  
Bruno Michel ◽  
John Richard Thome

Three-dimensional (3D) stacking of integrated-circuit (IC) dies increases system density and package functionality by vertically integrating two or more dies with area-array through-silicon-vias (TSVs). This reduces the length of global interconnects and the signal delay time and allows improvements in energy efficiency. However, the accumulation of heat fluxes and thermal interface resistances is a major limitation of vertically integrated packages. Scalable cooling solutions, such as two-phase interlayer cooling, will be required to extend 3D stacks beyond the most modest numbers of dies. This paper introduces a realistic 3D chip stack along with a simulation method for the heat spreading and flow distribution among the channels of the evaporators. The model includes the significant sensitivity of each channel's friction factor to vapor quality, and hence mass flow to heat flux, which characterizes parallel two-phase flows. Simulation cases explore various placements of hot spots within the stack and effects which are unique to two-phase interlayer cooling. The results show that the effect of hot spots on individual dies can be mitigated by strong interlayer heat conduction if the relative position of the hot spots is selected carefully to result in a heat load and flow which are well balanced laterally.
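The flow-distribution aspect of the model can be illustrated with a toy version of the parallel-channel problem: channels sharing inlet and outlet plena must all exhibit the same pressure drop, while each channel's two-phase pressure drop depends on its own heat load through the vapor quality. The sketch below (a homogeneous-flow toy model, not the correlations or geometry of the paper; all property values are assumptions) solves for the resulting per-channel mass flows by nested bisection.

```python
# Minimal sketch of flow distribution among parallel two-phase microchannels
# sharing inlet/outlet plena: every channel sees the same pressure drop, and
# per-channel mass flow adjusts until the flows sum to the imposed total.
# Homogeneous two-phase model; all property values are illustrative
# assumptions, not the correlations used in the paper.

H_FG = 1.4e5                  # latent heat, J/kg (assumed, R134a-like)
RHO_L, RHO_V = 1200.0, 30.0   # liquid/vapor densities, kg/m^3 (assumed)
MU_L = 2.0e-4                 # liquid viscosity, Pa s (assumed)
L, D = 0.01, 100e-6           # channel length and hydraulic diameter, m (assumed)
A = D * D                     # square channel cross-section, m^2

def dp(m_dot, q_watts):
    """Pressure drop of one channel (homogeneous model, laminar f = 64/Re,
    saturated-liquid inlet assumed)."""
    g = m_dot / A                                 # mass flux, kg/(m^2 s)
    x_out = min(q_watts / (m_dot * H_FG), 1.0)    # exit vapor quality
    x_mean = 0.5 * x_out
    rho_m = 1.0 / (x_mean / RHO_V + (1 - x_mean) / RHO_L)  # mixture density
    f = 64.0 * MU_L / (g * D)                     # laminar friction factor
    return f * (L / D) * g * g / (2.0 * rho_m)

def m_of_dp(target_dp, q_watts, lo=1e-8, hi=1e-3):
    """Invert dp(m) by bisection (dp is monotone in m for this model)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dp(mid, q_watts) < target_dp else (lo, mid)
    return 0.5 * (lo + hi)

def distribute(m_total, q_per_channel):
    """Find the common plenum-to-plenum dp so channel flows sum to m_total."""
    lo, hi = 1.0, 1e6   # Pa, bracketing guesses (assumed)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        m_sum = sum(m_of_dp(mid, q) for q in q_per_channel)
        lo, hi = (mid, hi) if m_sum < m_total else (lo, mid)
    return [m_of_dp(0.5 * (lo + hi), q) for q in q_per_channel]

# Three channels, one with a hot spot (heat loads in W, assumed):
flows = distribute(3e-5, [0.1, 0.1, 0.3])
print([f"{m * 1e6:.2f} mg/s" for m in flows])
```

Running it shows the coupling the paper studies: the strongly heated channel generates more vapor and receives the smallest mass flow, which is why the lateral balancing of hot spots matters.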


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 838
Author(s):  
Seddon Atkinson ◽  
Anna Detkina ◽  
Dzianis Litskevich ◽  
Bruno Merk

High-fidelity modelling for nuclear power plant analysis is becoming more common due to advances in modelling software and the availability of high-performance computers. However, for designing, developing and regulating new light water reactors there have, up until now, been limited requirements for high-fidelity methods, since the well-established computational methods are already widely accepted. This article explores the additional detail that can be obtained from high-fidelity Monte Carlo/sub-channel analysis compared with the industrial approach of cross-section/nodal analysis, using the Advanced Boiling Water Reactor as a case study. This case study was chosen because of the challenges of modelling two-phase flow and the high degree of heterogeneity within the fuel assembly design. The article investigates how to implement such an approach, using a bottom-up procedure, by analysing each stage of the modelling process.
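To make the distinction concrete, the elementary building block of a sub-channel calculation is an axial, node-by-node energy balance along each coolant channel, resolving the quality and void fraction that a coarse nodal method would homogenize. The following minimal sketch uses illustrative BWR-like numbers (assumed, not ABWR design data) and a homogeneous void model in place of the drift-flux closures that production codes use.

```python
# Minimal sketch of the axial energy balance at the heart of a sub-channel
# calculation: march node by node up one coolant channel, accumulating
# enthalpy and estimating void fraction. Values are illustrative for a
# BWR-like channel and are assumptions, not ABWR design data.

import math

N_NODES = 20
HEIGHT = 3.7                # heated length, m (typical BWR order of magnitude)
M_DOT = 0.25                # channel mass flow, kg/s (assumed)
H_IN = 1.2e6                # inlet enthalpy, J/kg (assumed, subcooled)
H_F, H_G = 1.27e6, 2.77e6   # sat. liquid/vapor enthalpy at ~7 MPa, J/kg
RHO_F, RHO_G = 740.0, 36.5  # sat. densities at ~7 MPa, kg/m^3

def axial_profile(power_watts):
    """Return (quality, void) per node for a chopped-cosine axial power shape."""
    # axial shape, normalized so the node powers sum to power_watts
    shape = [math.cos(math.pi * ((i + 0.5) / N_NODES - 0.5)) for i in range(N_NODES)]
    norm = power_watts / sum(shape)
    h, out = H_IN, []
    for s in shape:
        h += norm * s / M_DOT                             # node enthalpy rise
        x = max(0.0, min((h - H_F) / (H_G - H_F), 1.0))   # equilibrium quality
        # homogeneous (no-slip) void fraction; real codes use drift-flux models
        void = 0.0 if x == 0.0 else 1.0 / (1.0 + (1 - x) / x * RHO_G / RHO_F)
        out.append((x, void))
    return out

for i, (x, a) in enumerate(axial_profile(120e3)):   # 120 kW channel (assumed)
    print(f"node {i:2d}: x = {x:.3f}, void = {a:.2f}")
```

A sub-channel code repeats this balance for every channel in the assembly and couples the channels through cross-flow and turbulent mixing terms, which is exactly the resolution a nodal method trades away for speed.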


2020 ◽  
Author(s):  
Miroslav Kratochvíl ◽  
Oliver Hunewald ◽  
Laurent Heirendt ◽  
Vasco Verissimo ◽  
Jiří Vondrášek ◽  
...  

Abstract Background The amount of data generated in large clinical and phenotyping studies that use single-cell cytometry is constantly growing. Recent technological advances allow the easy generation of data with hundreds of millions of single-cell data points with more than 40 parameters, originating from thousands of individual samples. The analysis of that amount of high-dimensional data becomes demanding in both hardware and software of high-performance computational resources. Current software tools often do not scale to datasets of such size; users are thus forced to down-sample the data to bearable sizes, in turn losing accuracy and the ability to detect many underlying complex phenomena. Results We present GigaSOM.jl, a fast and scalable implementation of clustering and dimensionality reduction for flow and mass cytometry data. The implementation of GigaSOM.jl in the high-level and high-performance programming language Julia makes it accessible to the scientific community and allows for efficient handling and processing of datasets with billions of data points using distributed computing infrastructures. We describe the design of GigaSOM.jl, measure its performance and horizontal scaling capability, and showcase the functionality on a large dataset from a recent study. Conclusions GigaSOM.jl facilitates utilization of commonly available high-performance computing resources to process the largest available datasets within minutes, while producing results of the same quality as the current state-of-the-art software. Measurements indicate that the performance scales to much larger datasets. The example use on data from a massive mouse phenotyping effort confirms the applicability of GigaSOM.jl to huge-scale studies. Key points GigaSOM.jl improves the applicability of FlowSOM-style single-cell cytometry data analysis by increasing the acceptable dataset size to billions of single cells. A significant speedup over current methods is achieved by distributed processing and the use of efficient algorithms. The GigaSOM.jl package includes support for fast visualization of multidimensional data.


2001 ◽  
Vol 10 (02) ◽  
pp. 163-168 ◽  
Author(s):  
A. RUBIYANTO ◽  
H. HERRMANN ◽  
R. RICKEN ◽  
F. TIAN ◽  
W. SOHLER

A high-performance integrated acousto-optical heterodyne interferometer has been developed for vibration measurement. All components, including an acousto-optical TE–TM mode converter, two electro-optical TE–TM converters, two polarization splitters and two phase shifters, are integrated on an X-cut lithium niobate substrate. The fully packaged optical integrated circuit (optical IC), pigtailed with three optical fibers, gave a signal-to-noise ratio of 69 dB at 3 kHz bandwidth, using a commercial DFB laser diode emitting at 1561 nm as the light source and a PIN-FET balanced receiver.
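The measurement principle can be summarized in a few lines: the acousto-optic frequency shift places the interference signal on a carrier, target vibration phase-modulates it by 4πd(t)/λ, and I/Q demodulation of the detected beat note recovers the displacement. The sketch below simulates this chain; apart from the 1561 nm wavelength, all signal parameters (carrier frequency, sample rate, vibration amplitude) are assumptions chosen for illustration, not values from the device.

```python
# Minimal sketch of heterodyne vibrometry: the acousto-optic offset puts the
# signal on a carrier, motion phase-modulates it, and I/Q demodulation of the
# photodetector beat note recovers displacement. All signal parameters except
# the wavelength are assumptions for illustration.

import numpy as np

FS = 2e6                  # sample rate, Hz (assumed)
F_CARRIER = 200e3         # heterodyne (acousto-optic) offset frequency, Hz (assumed)
WAVELENGTH = 1.561e-6     # laser wavelength from the paper, m
F_VIB, AMP = 3e3, 50e-9   # vibration frequency and amplitude (assumed)

t = np.arange(0, 5e-3, 1.0 / FS)
displacement = AMP * np.sin(2 * np.pi * F_VIB * t)
phase = 4 * np.pi * displacement / WAVELENGTH      # double-pass phase: 4*pi*d/lambda
beat = np.cos(2 * np.pi * F_CARRIER * t + phase)   # idealized detector signal

# I/Q demodulation against the known carrier, then a crude moving-average low-pass
i_q = beat * np.exp(-2j * np.pi * F_CARRIER * t)
kernel = np.ones(64) / 64
i_q = np.convolve(i_q, kernel, mode="same")
recovered = np.unwrap(np.angle(i_q)) * WAVELENGTH / (4 * np.pi)

err = np.max(np.abs(recovered - displacement)[200:-200])  # ignore filter edges
print(f"peak reconstruction error: {err * 1e9:.2f} nm")
```

Because the information sits in the phase of the beat note rather than its amplitude, the scheme is insensitive to slow intensity drift, which is the usual motivation for heterodyne over homodyne detection.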


1970 ◽  
Vol 18 (4) ◽  
Author(s):  
Oleksandr I. Popovskyi

The relevance of the material presented in this article stems from the extensive use of high-performance computers to create modern information systems, including those of the NAPS of Ukraine. Most computers at the NAPS of Ukraine run on Intel Pentium processors at speeds from 600 MHz to 3 GHz and release a lot of heat, which requires installing 2-3 additional fans in the system unit. The fans always run at full power, which leads to rapid wear and a high noise level (up to 50 dB). In order to meet ergonomic requirements, it is proposed to equip the computer system unit with an additional fan control unit that allows independent control of each fan. The solution has been applied in the creation of Internet-based information systems for research planning at the National Academy of Pedagogical Sciences of Ukraine.
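The proposed improvement amounts to replacing always-on fans with per-fan proportional speed control driven by local temperature sensors. A minimal sketch of such a mapping follows; the thresholds and duty-cycle limits are illustrative assumptions, not the parameters of the described control unit.

```python
# Minimal sketch of the independent per-fan control the article proposes:
# each fan's duty cycle follows its own temperature sensor instead of
# running at full power continuously. Thresholds are illustrative
# assumptions, not the parameters of the described control unit.

IDLE_DUTY = 0.30             # minimum duty cycle to keep air moving (assumed)
T_LOW, T_HIGH = 35.0, 65.0   # deg C span for proportional control (assumed)

def fan_duty(temp_c):
    """Map a sensor temperature to a PWM duty cycle in [IDLE_DUTY, 1.0]."""
    if temp_c <= T_LOW:
        return IDLE_DUTY
    if temp_c >= T_HIGH:
        return 1.0
    frac = (temp_c - T_LOW) / (T_HIGH - T_LOW)
    return IDLE_DUTY + (1.0 - IDLE_DUTY) * frac

# Each fan follows its own sensor: the CPU fan spins up while a case fan idles.
for name, temp in {"cpu": 58.0, "case": 33.0, "psu": 47.0}.items():
    print(f"{name}: {fan_duty(temp) * 100:.0f}% duty")
```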


2010 ◽  
Vol 20 (1) ◽  
pp. 32-37 ◽  
Author(s):  
Lindsay Scattergood ◽  
Charles J. Limb

Abstract As a result of the widespread use of cochlear implants (CIs), individuals with profound hearing loss are now able to hear sounds ranging from a syllable to a symphony. This form of "electric hearing" has been remarkably successful in providing sound to the deaf population, and at least 100,000 implantation procedures have been performed worldwide in more than 80 countries (Clark, 2008). Today, it is routine for post-lingually deafened individuals (those who lost their hearing after normal childhood language acquisition) to achieve high performance on language tests following implantation (Lalwani, Larky, Wareing, Kwast, & Schindler, 1998). Deaf children implanted with a CI at an early age usually develop excellent spoken language skills, with placement into mainstream educational schooling (Francis, Koch, Wyatt, & Niparko, 1999). The overwhelming emphasis on language perception in CI users has led to relative neglect of non-linguistic sound perception. Yet the auditory world consists of many other sounds besides those of spoken language. Of all non-linguistic sounds, the perception of music, particularly pitch and timbre, represents the greatest challenge for implant-mediated listening (Limb, 2006). High-level perception of music is rarely attained through conventional speech-processing technology in adults or children. Recent technological advances, however, have increased the processing capabilities of modern CIs and hold great promise for music perception and quality of life for children with cochlear implants (Lassaletta et al., 2007).


GigaScience ◽  
2020 ◽  
Vol 9 (11) ◽  
Author(s):  
Miroslav Kratochvíl ◽  
Oliver Hunewald ◽  
Laurent Heirendt ◽  
Vasco Verissimo ◽  
Jiří Vondrášek ◽  
...  

Abstract Background The amount of data generated in large clinical and phenotyping studies that use single-cell cytometry is constantly growing. Recent technological advances allow the easy generation of data with hundreds of millions of single-cell data points with >40 parameters, originating from thousands of individual samples. The analysis of that amount of high-dimensional data becomes demanding in both hardware and software of high-performance computational resources. Current software tools often do not scale to the datasets of such size; users are thus forced to downsample the data to bearable sizes, in turn losing accuracy and ability to detect many underlying complex phenomena. Results We present GigaSOM.jl, a fast and scalable implementation of clustering and dimensionality reduction for flow and mass cytometry data. The implementation of GigaSOM.jl in the high-level and high-performance programming language Julia makes it accessible to the scientific community and allows for efficient handling and processing of datasets with billions of data points using distributed computing infrastructures. We describe the design of GigaSOM.jl, measure its performance and horizontal scaling capability, and showcase the functionality on a large dataset from a recent study. Conclusions GigaSOM.jl facilitates the use of commonly available high-performance computing resources to process the largest available datasets within minutes, while producing results of the same quality as the current state-of-the-art software. Measurements indicate that the performance scales to much larger datasets. The example use on the data from a massive mouse phenotyping effort confirms the applicability of GigaSOM.jl to huge-scale studies.
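For readers unfamiliar with the underlying method, the core of FlowSOM-style analysis is a self-organizing map (SOM) trained in batch mode, which is also what makes distributed training possible: per-node sums and counts can be accumulated on separate workers and merged. The NumPy sketch below illustrates the batch-SOM update on toy data; it is not the GigaSOM.jl API, and the grid size, radius schedule and synthetic data are assumptions for illustration.

```python
# Minimal NumPy sketch of batch self-organizing-map (SOM) training, the
# algorithm underlying FlowSOM-style tools such as GigaSOM.jl: each epoch
# assigns every cell to its nearest SOM node, then moves each node toward a
# neighborhood-weighted mean of its assigned cells. Illustration only; this
# is not the GigaSOM.jl API, and all parameters are assumptions.

import numpy as np

def train_batch_som(data, grid=(10, 10), epochs=10, radius0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    # node coordinates on the 2D grid, used by the neighborhood function
    gy, gx = np.divmod(np.arange(n_nodes), grid[1])
    coords = np.stack([gy, gx], axis=1).astype(float)
    codes = data[rng.choice(len(data), n_nodes, replace=False)].copy()
    grid_d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    for epoch in range(epochs):
        radius = radius0 * (1.0 - epoch / epochs) + 0.5   # shrinking radius
        # best-matching unit for every data point
        d2 = ((data[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2)
        bmu = d2.argmin(axis=1)
        # Gaussian neighborhood between nodes on the grid
        h = np.exp(-grid_d2 / (2.0 * radius * radius))
        # batch update: neighborhood-weighted mean of assigned points
        counts = np.zeros(n_nodes)
        sums = np.zeros_like(codes)
        np.add.at(counts, bmu, 1.0)
        np.add.at(sums, bmu, data)
        w = h @ counts                                    # guard empty nodes below
        codes = (h @ sums) / np.maximum(w, 1e-12)[:, None]
    return codes

# Toy usage on synthetic "cytometry-like" data: 3 populations, 5 markers
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(c, 0.3, size=(1000, 5)) for c in (0.0, 2.0, 4.0)])
codes = train_batch_som(data)
clusters = ((data[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
print("cells per SOM node (first 10):", np.bincount(clusters, minlength=100)[:10])
```

Because the update reduces each epoch to sums and counts per node, the assignment step can be sharded across workers and merged by addition, which is the property GigaSOM.jl exploits to scale to billions of cells.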


2020 ◽  
Author(s):  
James McDonagh ◽  
William Swope ◽  
Richard L. Anderson ◽  
Michael Johnston ◽  
David J. Bray

Digitization offers significant opportunities for the formulated product industry to transform the way it works and to develop new methods of business. R&D is one area of operation where taking advantage of these technologies is challenging, owing to its high level of domain specialisation and creativity, but the benefits could be significant. Recent developments in base-level technologies such as artificial intelligence (AI)/machine learning (ML), robotics and high-performance computing (HPC), to name a few, present disruptive and transformative capabilities which could offer new insights, discovery methods and enhanced chemical control when combined in a digital ecosystem of connectivity, distributed services and decentralisation. At the fundamental level, research in these technologies has shown that new physical and chemical insights can be gained, which in turn can augment experimental R&D approaches through physics-based chemical simulation, data-driven models and hybrid approaches. In all of these cases, high-quality data are required to build and validate models, in addition to the skills and expertise needed to exploit such methods. In this article we give an overview of some of the digital technology demonstrators we have developed for formulated product R&D and discuss the challenges in building and deploying these demonstrators.

