Multiscale Transient Thermal Analysis of Microelectronics

2015, Vol 137 (3)
Author(s): Banafsheh Barabadi, Satish Kumar, Valeriy Sukharev, Yogendra K. Joshi

In a microelectronic device, thermal transport must be simulated on scales ranging from tens of nanometers to hundreds of millimeters. Accurate multiscale models are required to develop engineering tools for predicting temperature distributions in such devices. A computationally efficient and accurate multiscale reduced-order transient thermal modeling methodology was developed by combining two approaches: the “progressive zoom-in” method and the “proper orthogonal decomposition (POD)” technique. The approach was shown to handle several decades of length scales, from the package down to individual chip components, at considerably lower computational cost while maintaining satisfactory accuracy. A flip chip ball grid array (FCBGA) package was considered for demonstration. The transient temperatures and heat fluxes calculated on the top and bottom walls of the embedded chip in the package-level simulations are employed as dynamic boundary conditions for the chip-level simulation. The chip is divided into ten functional blocks, and a randomly generated dynamic power source is applied in each block. The temperature rise in the different layers of the chip calculated from the multiscale model is compared with a finite element (FE) model. The close agreement between the two models confirms that the multiscale approach can accurately predict the temperature rise for scenarios with different power sources in the functional blocks, without performing detailed FE simulations, which significantly reduces the computational effort.
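
As a minimal sketch of the POD step in such a reduced-order methodology (not the authors' implementation), the modes can be extracted from a snapshot matrix of transient temperature fields by a singular value decomposition; all sizes and data below are illustrative placeholders.

```python
import numpy as np

# Snapshot matrix: each column is a temperature field sampled at one time
# instant of a detailed transient simulation (placeholder random data here).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((5000, 200))

# Subtract the mean field so the modes capture fluctuations about it.
mean_field = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_field

# POD modes are the left singular vectors of the snapshot matrix; the
# squared singular values rank how much "thermal energy" each mode carries.
U, s, _ = np.linalg.svd(fluctuations, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)

# Keep just enough modes to capture, say, 99.9% of the snapshot energy.
n_modes = int(np.searchsorted(energy, 0.999)) + 1
pod_basis = U[:, :n_modes]
print(f"retained {n_modes} POD modes")
```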

Author(s): Banafsheh Barabadi, Satish Kumar, Valeriy Sukharev, Yogendra K. Joshi

Thermal transport in microelectronic devices spans length scales from tens of nanometers to hundreds of millimeters. A major challenge in maintaining quality and reliability in today’s microelectronic devices comes from the ever-increasing level of integration in device fabrication, as well as the high current densities carried through the microchip during operation. Consequently, significant opportunities for energy efficiency exist at various levels of the length-scale hierarchy through optimization of thermal management resources. In this study, we developed a computationally efficient and accurate multiscale reduced-order transient thermal methodology consisting of a hybrid implementation of two multiscale approaches: the “progressive zoom-in” method and the “proper orthogonal decomposition (POD)” technique. The suggested approach can predict different thermal scenarios from one representative thermal scenario, while maintaining the desired spatial and temporal accuracy. In this paper, a flip chip ball grid array (FCBGA) package was considered for hybrid modeling. To demonstrate the capability of the POD method in predicting different thermal scenarios, the chip was divided into ten functional blocks, each with a different randomly generated dynamic power source. To validate this methodology, the results were compared with a finite element (FE) model developed in COMSOL®. The behavior of the POD model was in good agreement with the corresponding FE model. This close agreement makes it possible to predict other thermal scenarios from a smaller sample set, which can significantly decrease the computational cost.
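
To illustrate the test setup described above (ten functional blocks, each with its own randomly generated dynamic power source), a minimal sketch follows; the piecewise-constant waveform, power range, and time step are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_blocks, n_steps, dt = 10, 1000, 1e-4  # ten blocks, assumed 0.1 s horizon

# Piecewise-constant random power per block: each block holds a random
# level (assumed 0-2 W) for a random number of time steps, then switches.
power = np.zeros((n_blocks, n_steps))
for b in range(n_blocks):
    t = 0
    while t < n_steps:
        hold = int(rng.integers(20, 100))       # hold length in time steps
        power[b, t:t + hold] = rng.uniform(0.0, 2.0)
        t += hold

# power[b, k] is the heat source of block b at time k * dt, ready to be
# applied as the forcing term of the chip-level thermal model.
```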


2007, Vol 4 (1), pp. 23-30
Author(s): Kimmo Kaija, Pekka Heino

This paper is a case study of the thermal behavior of a stacked multichip package (SMCP). The aim is to measure temperature responses when heat is dissipated on different dice and to characterize the behavior with a compact thermal model (CTM) that accurately reproduces steady-state and transient responses with a simple thermal RC network. The measured package consists of three stacked layers, each with one thinned flip-chip-attached die on an aramid interposer. The package’s thermal responses were measured with thermal test dice that contain heaters and temperature sensors. The package was modeled with the finite element method (FEM), and the simulated temperature responses showed reasonable agreement with the measured data. The FE model was further used to provide reference thermal data under different boundary conditions for CTM synthesis. The obtained CTM accurately models the steady-state and transient behavior and can be used as a simplified model of the measured SMCP for further thermal analysis.
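
A minimal sketch of the kind of compact model described, assuming a Foster-form RC network whose transient thermal impedance is a sum of first-order stages; the R and C values below are placeholders, not the synthesized SMCP parameters.

```python
import numpy as np

# Foster-form CTM: Zth(t) = sum_i R_i * (1 - exp(-t / (R_i * C_i))).
R = np.array([2.0, 5.0, 10.0])   # K/W, placeholder stage resistances
C = np.array([1e-3, 1e-2, 0.5])  # J/K, placeholder stage capacitances

def zth(t):
    """Transient thermal impedance of the Foster network at time(s) t."""
    t = np.atleast_1d(t)[:, None]
    return np.sum(R * (1.0 - np.exp(-t / (R * C))), axis=1)

P = 1.5  # W, assumed dissipated power step
times = np.logspace(-4, 1, 6)
for ti, dT in zip(times, P * zth(times)):
    print(f"t = {ti:8.4f} s  ->  dT = {dT:6.2f} K")
```

The steady-state rise is P times the sum of the stage resistances, while the exponential terms reproduce the transient response that the CTM is fitted to.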


Author(s): Ratnesh Raj, Daipayan Sarkar, Ankur Jain

A large fraction of the energy consumed in modern microelectronic devices and systems is taken up by memory access operations, which is expected to cause significant temperature rise. Since memory access operations are very short in duration, the resulting heating is inherently a transient thermal phenomenon. Despite the critical importance of thermal management in microelectronics, little work exists on understanding the nature of thermal transport during memory access operations. In this work, a mathematical model to predict the transient temperature rise within a 3D memory chip is presented. Most heat-generating memory access processes occur over a timescale short enough that the thermal penetration depth is smaller than the die thickness. This enables such processes to be modeled independently of the nature of chip cooling by treating the chip as a semi-infinite medium. A semi-infinite Green’s function model is developed for one bank of memory on a single layer of a block of the memory chip. This model is validated against finite element simulation results, as well as against the analytical solution for a limiting case. The analytical model is used to analyze the transient thermal effects of various memory access processes for multiple banks. These results will help develop an understanding of optimal layouts and processes for 3D memory chips, eventually leading to co-design tools that simultaneously improve the thermal and electrical performance of 3D memory chips.
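
The limiting case mentioned above has a classical closed form: for a constant flux q0 into a semi-infinite solid, the temperature rise is ΔT(x, t) = (2 q0 / k) √(αt) ierfc(x / (2√(αt))), so the surface rise grows as √t. A minimal sketch, with silicon-like properties assumed for illustration:

```python
import numpy as np
from scipy.special import erfc

# Silicon-like properties (assumed for illustration).
k, rho, cp = 130.0, 2330.0, 700.0   # W/m-K, kg/m^3, J/kg-K
alpha = k / (rho * cp)              # thermal diffusivity, m^2/s
q0 = 1e6                            # W/m^2, assumed constant surface flux

def dT(x, t):
    """Semi-infinite solid under constant surface flux, with
    ierfc(z) = exp(-z^2)/sqrt(pi) - z*erfc(z)."""
    z = x / (2.0 * np.sqrt(alpha * t))
    ierfc = np.exp(-z**2) / np.sqrt(np.pi) - z * erfc(z)
    return 2.0 * q0 / k * np.sqrt(alpha * t) * ierfc

t = 1e-6  # assumed 1-microsecond access event
print(f"penetration depth ~ {np.sqrt(alpha * t) * 1e6:.1f} um")
print(f"surface rise      = {dT(0.0, t):.3f} K")
```

For microsecond-scale events the penetration depth is a few micrometers, far less than a typical die thickness, which is what justifies the semi-infinite treatment.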


Author(s): Banafsheh Barabadi, Yogendra K. Joshi, Satish Kumar

A major challenge in maintaining quality and reliability in today’s microelectronic devices comes from the ever-increasing level of integration in device fabrication, as well as the high current densities carried through the microchip during operation. Cyclic thermal events during operation, stemming from Joule heating of the metal lines, can lead to fatigue failure due to the differing thermal expansion coefficients of the materials that compose the microchip package. To help avoid such device failures, it is imperative to develop a predictive capability for the thermal response of microelectronic circuits. This work studied the problem of transient Joule heating in interconnects in a two-dimensional (2D) inhomogeneous system using a reduced-order modeling approach based on the Proper Orthogonal Decomposition (POD) method and the Galerkin projection technique. The study considers an interconnect structure embedded in the bulk of a microelectronic device. The effects of different types of current pulses, pulse durations, and pulse amplitudes were investigated. By using a representative step function as the heat source, the model predicted the transient thermal behavior of the system for all other cases without generating any new observations, using just a few POD modes. To validate this capability, the results of the POD model were compared with a finite element (FE) model developed in LS-DYNA®. The behavior of the POD models was in good agreement with the corresponding FE models. This close agreement makes it possible to predict other cases from a smaller sample set, which can significantly decrease the computational cost.
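
A minimal sketch of the Galerkin-projection step under stated assumptions: given a semi-discrete heat equation C dT/dt = -K T + Q(t), projecting onto an orthonormal basis Φ yields a small ODE system for the mode coefficients. The matrices below are random stand-ins, not an assembled FE system, and the basis is a placeholder for a true POD basis.

```python
import numpy as np

n, r = 400, 5                            # full and reduced dimensions
rng = np.random.default_rng(1)

# Stand-ins for FE conductance (SPD by construction) and lumped capacitance.
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)
Cm = np.diag(rng.uniform(1.0, 2.0, n))

Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]  # orthonormal basis

# Galerkin projection: with T(t) ~ Phi @ a(t), the reduced operators are
# small r x r matrices, so time integration becomes very cheap.
Cr = Phi.T @ Cm @ Phi
Kr = Phi.T @ K @ Phi

def rhs(a, q_full):
    """da/dt of the reduced system for a full-order source vector q_full."""
    return np.linalg.solve(Cr, Phi.T @ q_full - Kr @ a)

# Explicit Euler on a step heat source (stands in for a current pulse).
a, dt, q = np.zeros(r), 1e-4, rng.uniform(0.0, 1.0, n)
for _ in range(1000):
    a += dt * rhs(a, q)
T_approx = Phi @ a  # reconstructed temperature field
```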


Author(s): Wei Zhang, Saad Ahmed, Jonathan Hong, Zoubeida Ounaies, Mary Frecker

Different types of active materials have been used to actuate origami-inspired self-folding structures. To model the highly nonlinear deformation and material responses, as well as the coupled field equations and boundary conditions of such structures, high-fidelity models such as finite element (FE) models are needed but are usually computationally expensive, which makes optimization intractable. In this paper, a computationally efficient two-stage optimization framework is developed as a systematic method for the multi-objective design of such multifield self-folding structures, where the deformations are concentrated in crease-like areas, active and passive materials are assumed to behave linearly, and low- and high-fidelity models of the structures can be developed. In Stage 1, low-fidelity models are used to determine the topology of the structure. At the end of Stage 1, a distance measure is applied as the metric to determine the best design, which then serves as the baseline design in Stage 2. In Stage 2, designs are further optimized from the baseline design with greatly reduced computing time compared to a full FEA-based topology optimization. The design framework is first described in a general formulation. To demonstrate its efficacy, the framework is implemented in two case studies: a three-finger soft gripper actuated using a PVDF-based terpolymer, and a 3D multifield example actuated using both the terpolymer and a magneto-active elastomer (MAE). The key steps are elaborated in detail, including the variable filter, metrics to select the best design, determination of design domains, and material conversion methods from low- to high-fidelity models. Analytical models and rigid-body dynamic models are developed as the low-fidelity models for the terpolymer- and MAE-based actuations, respectively, and the FE model of the MAE-based actuation is generalized from previous work. Additional generalizable techniques to further reduce the computational cost are elaborated. As a result, designs with better overall performance than the baseline design were achieved at the end of Stage 2, with computing times of 15 days for the gripper and 9 days for the multifield example, compared with over 3 and 2 months, respectively, for full FEA-based optimizations. Tradeoffs between the competing design objectives were achieved. In both case studies, the efficacy and computational efficiency of the two-stage optimization framework are successfully demonstrated.
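
A minimal sketch of the two-stage pattern under stated assumptions: a cheap low-fidelity model screens many candidates, and the best one seeds a short high-fidelity refinement. Both model functions below are illustrative placeholders, not the paper's electromechanical models.

```python
import numpy as np

rng = np.random.default_rng(7)

def low_fidelity(x):
    """Cheap surrogate objective (placeholder analytic model)."""
    return float(np.sum((x - 0.3)**2) + 0.1 * np.sin(5 * x).sum())

def high_fidelity(x):
    """Expensive objective, an FE solve in practice (placeholder here)."""
    return low_fidelity(x) + 0.05 * float(np.cos(3 * x).sum())

# Stage 1: broad random screening with the low-fidelity model; the best
# candidate becomes the baseline design.
candidates = rng.uniform(0.0, 1.0, size=(500, 4))
baseline = min(candidates, key=low_fidelity)

# Stage 2: local refinement around the baseline using far fewer expensive
# high-fidelity evaluations than a full high-fidelity search would need.
best, best_val = baseline, high_fidelity(baseline)
for _ in range(30):
    trial = np.clip(best + rng.normal(0.0, 0.05, 4), 0.0, 1.0)
    val = high_fidelity(trial)
    if val < best_val:
        best, best_val = trial, val
```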


Energies, 2020, Vol 14 (1), pp. 118
Author(s): Feng Zhu, Runzhou Zhou, David J. Sypeck

In this work, a computational study was carried out to simulate crushing tests on lithium-ion vehicle battery modules. The tests were performed on commercial battery modules subjected to wedge cutting at low speeds. Based on the loading and boundary conditions in the tests, finite element (FE) models were developed using the explicit FEA code LS-DYNA. The model predictions demonstrated good agreement in terms of structural failure modes and force–displacement responses at both the cell and module levels. The model was extended to study additional loading conditions, such as indentation by a cylinder and by a rectangular block. The effect of other module components, such as the cover and cooling plates, was analyzed, and the results have the potential to improve battery module safety design. To reduce computational cost, a simplified model was then developed from the detailed FE model by representing the battery module with a homogeneous material law. All three scenarios were simulated, and the results show that this simplified model can reasonably predict the short-circuit initiation of the battery module.
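
As a toy illustration of the homogenization idea behind the simplified model (not the paper's calibrated material law), a series rule-of-mixtures estimate of the effective through-thickness stiffness of a layered cell stack; all layer data are assumptions.

```python
# Toy homogenization: effective through-thickness modulus of a layered
# stack via the series (Reuss) rule of mixtures. Layer data are assumed.
layers = [  # (name, thickness in um, elastic modulus in GPa)
    ("anode",     80.0, 10.0),
    ("separator", 20.0,  0.5),
    ("cathode",   70.0, 12.0),
    ("foil",      15.0, 70.0),
]
total = sum(t for _, t, _ in layers)
compliance = sum((t / total) / E for _, t, E in layers)
E_eff = 1.0 / compliance
print(f"effective modulus ~ {E_eff:.2f} GPa")
```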


Author(s): Alessandra Cuneo, Alberto Traverso, Shahrokh Shahpar

In engineering design, uncertainty is inevitable and can cause a significant deviation in the performance of a system. Uncertainty in input parameters can be categorized into two groups: aleatory and epistemic. The work presented here focuses on aleatory uncertainty, which causes natural, unpredictable, and uncontrollable variations in the performance of the system under study. Such uncertainty can be quantified using statistical methods, but the main obstacle is often the computational cost, because the representative model is typically highly non-linear and complex. It is therefore necessary to have a robust tool that can perform uncertainty propagation with as few evaluations as possible. In the last few years, different methodologies for uncertainty propagation and quantification have been proposed. The focus of this study is to evaluate four methods to demonstrate the strengths and weaknesses of each approach. The first method is Monte Carlo simulation, a sampling method that can give high accuracy but requires relatively large computational effort. The second method is Polynomial Chaos, an approximation method in which the probabilistic parameters of the response function are modelled with orthogonal polynomials. The third method is the Mid-range Approximation Method, based on assembling multiple meta-models into one model to perform optimization under uncertainty. The fourth method applies the first two methods not directly to the model but to a response surface representing it, to decrease the computational cost. All these methods were applied to a set of analytical test functions and engineering test cases. Relevant aspects of engineering design and analysis, such as a high number of stochastic variables and optimised design problems with and without stochastic design parameters, were assessed. Polynomial Chaos emerges as the most promising methodology and was then applied to a turbomachinery test case based on a thermal analysis of a high-pressure turbine disk.
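
A minimal sketch contrasting the first two methods on a toy response with one standard-normal input: plain Monte Carlo sampling versus a small polynomial chaos expansion in probabilists' Hermite polynomials, with coefficients computed by Gauss-Hermite quadrature. The response function is illustrative.

```python
import math
import numpy as np

def response(x):
    """Toy nonlinear response of one standard-normal input (illustrative)."""
    return np.exp(0.3 * x) + 0.5 * x**2

# --- Monte Carlo: accurate but sample-hungry ---
rng = np.random.default_rng(3)
mc = response(rng.standard_normal(100_000))
print(f"MC : mean={mc.mean():.4f}  var={mc.var():.4f}")

# --- Polynomial Chaos: project onto probabilists' Hermite polynomials ---
# hermgauss targets weight exp(-t^2); substituting x = sqrt(2) t makes the
# rule integrate against the standard normal density.
t, w = np.polynomial.hermite.hermgauss(8)
x, w = np.sqrt(2.0) * t, w / np.sqrt(np.pi)

def He(n, x):
    """Probabilists' Hermite polynomial He_n via the standard recurrence."""
    h0, h1 = np.ones_like(x), x
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h0 if n == 0 else h1

# c_n = E[f(X) He_n(X)] / n!, since E[He_n(X)^2] = n!.
coeffs = [np.sum(w * response(x) * He(n, x)) / math.factorial(n)
          for n in range(5)]
mean = coeffs[0]
var = sum(c**2 * math.factorial(n) for n, c in enumerate(coeffs[1:], 1))
print(f"PCE: mean={mean:.4f}  var={var:.4f}")
```

Eight deterministic model evaluations reproduce statistics that Monte Carlo needs thousands of samples to estimate, which is the cost argument made above.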


Geophysics, 2016, Vol 81 (5), pp. S317-S331
Author(s): Jianfeng Zhang, Zhengwei Li, Linong Liu, Jin Wang, Jincheng Xu

We have improved the so-called deabsorption prestack time migration (PSTM) by introducing a dip-angle-domain stationary-phase implementation. Deabsorption PSTM compensates absorption and dispersion along the actual wave propagation path using effective Q parameters obtained during migration. However, noise induced by the compensation degrades the resolution gained, and deabsorption PSTM requires more computational effort than conventional PSTM. Our stationary-phase implementation improves deabsorption PSTM through the determination of an optimal migration aperture based on an estimate of the Fresnel zone. This significantly attenuates the noise and reduces the computational cost of 3D deabsorption PSTM. We estimated the 2D Fresnel zone in terms of two dip angles by building a pair of 1D migrated dip-angle gathers using PSTM. Our stationary-phase QPSTM (deabsorption PSTM) was implemented as a two-stage process: first, conventional PSTM is used to obtain the Fresnel zones; then, deabsorption PSTM is performed with the Fresnel-zone-based optimized migration aperture. We applied stationary-phase QPSTM to a 3D field data set. Comparison with a synthetic seismogram generated from well-log data validates the resolution enhancement.
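
As a rough illustration of the Fresnel-zone idea that drives the aperture choice (not the paper's dip-angle-domain estimate), the classical first-Fresnel-zone radius for a monochromatic wave:

```python
import numpy as np

def fresnel_radius(v, f, z):
    """First Fresnel zone radius (m) at depth z (m) for velocity v (m/s)
    and dominant frequency f (Hz): r = sqrt(lambda * z / 2), lambda = v/f."""
    return np.sqrt(v * z / (2.0 * f))

# Illustrative values: 3000 m/s medium, 30 Hz dominant frequency.
for z in (1000.0, 2000.0, 3000.0):
    r = fresnel_radius(3000.0, 30.0, z)
    print(f"depth {z:5.0f} m -> Fresnel radius ~ {r:5.0f} m")
```

Restricting the migration aperture to roughly this zone keeps the stationary-phase contributions while excluding the wide-angle traces that mostly add noise.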

