Stacking Yield Prediction of Package-on-Package Assembly Using Advanced Uncertainty Propagation Analysis: Part I Stochastic Model Development

2019 ◽  
Vol 142 (1) ◽  
Author(s):  
Hsiu-Ping Wei ◽  
Yu-Hsiang Yang ◽  
Bongtae Han

A comprehensive stochastic model is proposed to predict Package-on-Package (PoP) stacking yield loss. The model takes into account all pad locations at the stacking interface while considering the statistical variations of the warpages and solder ball heights of both the top and bottom packages. The goal is achieved by employing three statistical methods: (1) the eigenvector dimension reduction (EDR) method, an advanced approximate integration-based method, to conduct uncertainty propagation (UP) analyses; (2) the stress-strength interference (SSI) model to determine the noncontact probability at a single pad; and (3) the union of events, considering the statistical dependence, to calculate the final yield loss. In this first part, the theoretical development of the proposed stochastic model is presented. Implementation of the proposed model is presented in a companion paper.
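The SSI step described above can be sketched in a few lines. This is a minimal illustration, assuming independent, normally distributed gap and ball height with hypothetical numeric values; the paper's model uses the PDFs produced by the EDR analysis rather than assumed normals.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def noncontact_probability(mu_gap, sigma_gap, mu_ball, sigma_ball):
    """P(gap > ball height) for independent normal gap and ball height.

    In SSI terms the solder ball height plays the role of 'strength' and
    the warpage-induced gap the role of 'stress'; a noncontact open
    occurs when stress exceeds strength.
    """
    mu_d = mu_gap - mu_ball
    sigma_d = math.hypot(sigma_gap, sigma_ball)
    return normal_cdf(mu_d / sigma_d)

# Hypothetical example: mean gap 0.28 mm vs. mean ball height 0.30 mm
p_open = noncontact_probability(0.28, 0.01, 0.30, 0.01)
```

When gap and ball height have equal means, the closed form reduces to 0.5, which is a quick sanity check on the implementation.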

2019 ◽  
Vol 142 (1) ◽  
Author(s):  
Hsiu-Ping Wei ◽  
Yu-Hsiang Yang ◽  
Bongtae Han

The stochastic model for yield loss prediction proposed in Part I is implemented for a package-on-package (PoP) assembly. The assembly consists of a stacked-die thin flat ball grid array (TFBGA) as the top package and a flip chip ball grid array (fcBGA) as the bottom package. The top and bottom packages are connected through 216 solder joints of 0.5 mm pitch arranged in two peripheral rows. The warpage values of the top and bottom packages are calculated by finite element analysis (FEA), and the corresponding probability density functions (PDFs) are obtained by the eigenvector dimension reduction (EDR) method. The solder ball heights of the top and bottom packages and the corner pad joint heights are determined by Surface Evolver, and their PDFs are likewise obtained by the EDR method. Only 137 modeling runs are required to obtain all 549 PDFs despite the 27 input variables considered in the study. Finally, the noncontact open-induced stacking yield loss of the PoP assembly is predicted from the PDFs.
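Once a noncontact probability is available for each of the 216 pads, the assembly-level yield loss is the probability that at least one pad is open. A minimal sketch, with hypothetical per-pad probabilities: the paper's union-of-events calculation models the actual statistical dependence between pads, whereas this illustration only brackets the answer with dependence-free bounds.

```python
import math

def yield_loss_bounds(p_open):
    """Bounds on P(at least one noncontact open) across all pads.

    lower bound: perfectly dependent pads (the worst pad dominates);
    upper bound: Boole's inequality; the fully independent case is
    shown for reference between the two.
    """
    lower = max(p_open)
    independent = 1.0 - math.prod(1.0 - p for p in p_open)
    upper = min(1.0, sum(p_open))
    return lower, independent, upper

# 216 pads, each with a hypothetical 1e-4 noncontact probability
lo, ind, up = yield_loss_bounds([1e-4] * 216)
```

The true yield loss under any dependence structure must fall between `lo` and `up`, which makes the bounds a useful cross-check on the full calculation.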


2006 ◽  
Vol 128 (2) ◽  
pp. 588-597 ◽  
Author(s):  
Thaweepat Buranathiti ◽  
Jian Cao ◽  
Wei Chen ◽  
Lusine Baghdasaryan ◽  
Z. Cedric Xia

Model validation has become an increasingly important issue in the decision-making process for model development, as numerical simulations have widely demonstrated their benefits in reducing development time and cost. Frequently, the trustworthiness of models is inevitably questioned in this competitive and demanding world. By definition, model validation is a means to systematically establish a level of confidence in models. To demonstrate the process of model validation for simulation-based models, a sheet metal flanging process is used as an example, with the objective of predicting the final geometry, or springback. This forming process involves large deformation of sheet metals, contact between tooling and blanks, and process uncertainties. The corresponding uncertainties in material properties and process conditions are investigated and taken as inputs to the uncertainty propagation, where metamodels (models of the model) are developed to efficiently and effectively compute the total uncertainty/variation of the final configuration. Three model validation techniques (graphical comparison, the confidence interval technique, and the r2 technique) are applied, and the strengths and weaknesses of each are examined. The latter two techniques offer a broader perspective due to the involvement of statistical and uncertainty analyses. The proposed model validation approaches reduce the number of experiments to one per design point by shifting the evaluation effort to the uncertainty propagation of the simulation model rather than relying on costly physical experiments.
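The metamodel idea can be sketched compactly. This is an illustrative stand-in, not the paper's metamodel: `simulate` is a hypothetical cheap surrogate for an expensive springback run, three runs define an interpolating quadratic, and Monte Carlo sampling is then done on the quadratic instead of the simulator.

```python
import random
import statistics

def simulate(x):
    """Hypothetical stand-in for one expensive forming simulation."""
    return 2.0 + 0.5 * x + 0.1 * x * x

# Three simulation runs at -1, 0, +1 define an interpolating quadratic
# metamodel y ~ a + b*x + c*x^2 (central finite differences)
y_m1, y_0, y_p1 = simulate(-1.0), simulate(0.0), simulate(1.0)
a = y_0
b = (y_p1 - y_m1) / 2.0
c = (y_p1 - 2.0 * y_0 + y_m1) / 2.0

def metamodel(x):
    return a + b * x + c * x * x

# Uncertainty propagation: sample the cheap metamodel, not the simulator
rng = random.Random(0)
samples = [metamodel(rng.gauss(0.0, 0.5)) for _ in range(20000)]
mu, sd = statistics.mean(samples), statistics.stdev(samples)
```

Because the surrogate here is exactly quadratic, the metamodel reproduces it and the propagated mean approaches the analytic value 2 + 0.1·Var(X) = 2.025.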


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 772 ◽  
Author(s):  
Houshyar Honar Pajooh ◽  
Mohammad Rashid ◽  
Fakhrul Alam ◽  
Serge Demidenko

The proliferation of smart devices in Internet of Things (IoT) networks creates significant security challenges for the communications between such devices. Blockchain is a decentralized and distributed technology that can potentially tackle the security problems within 5G-enabled IoT networks. This paper proposes a multi-layer blockchain security model to protect IoT networks while simplifying the implementation. The concept of clustering is utilized to facilitate the multi-layer architecture. K unknown clusters are formed within the IoT network by a hybrid evolutionary computation algorithm combining simulated annealing and genetic algorithms. The chosen cluster heads are responsible for local authentication and authorization. A local private blockchain implementation facilitates communications between the cluster heads and relevant base stations. Such a blockchain enhances credibility assurance and security while also providing a network authentication mechanism. The open-source Hyperledger Fabric blockchain platform is deployed for the proposed model development. Base stations adopt a global blockchain approach to communicate with each other securely. The simulation results demonstrate that the proposed clustering algorithm performs well when compared to earlier reported approaches. The proposed lightweight blockchain model is also shown to be better suited to balancing network latency and throughput than a traditional global blockchain.
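The simulated annealing half of the hybrid clustering step can be sketched as follows. This is a simplified illustration under assumed conditions (random 2D node positions, total node-to-head distance as the cost), not the paper's algorithm, which also incorporates genetic operators.

```python
import math
import random

rng = random.Random(42)
nodes = [(rng.random(), rng.random()) for _ in range(40)]  # IoT node positions
K = 4  # number of cluster heads

def cost(heads):
    """Total distance from every node to its nearest cluster head."""
    return sum(min(math.dist(n, nodes[h]) for h in heads) for n in nodes)

# Simulated annealing over the choice of K cluster-head indices:
# perturb one head, accept worse moves with probability exp(-dc/T)
heads = rng.sample(range(len(nodes)), K)
best, best_cost = list(heads), cost(heads)
init_cost = best_cost
T = 1.0
while T > 1e-3:
    cand = list(heads)
    cand[rng.randrange(K)] = rng.randrange(len(nodes))
    if len(set(cand)) == K:  # keep K distinct heads
        dc = cost(cand) - cost(heads)
        if dc < 0 or rng.random() < math.exp(-dc / T):
            heads = cand
            c = cost(heads)
            if c < best_cost:
                best, best_cost = list(heads), c
    T *= 0.99  # geometric cooling schedule
```

In the hybrid scheme, a genetic algorithm would maintain a population of such head selections and recombine them, with annealing refining individual candidates.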


2021 ◽  
Vol 99 (Supplement_1) ◽  
pp. 55-56
Author(s):  
Christian D Ramirez-Camba ◽  
Crystal L Levesque

Abstract A mechanistic model was developed to characterize weight gain and essential amino acid (EAA) deposition in the tissue pools that make up the pregnant sow: placenta, allantoic fluid, amniotic fluid, fetus, uterus, mammary gland, and maternal body. The data used in this modelling approach were obtained from published scientific articles reporting weights, crude protein (CP), and EAA composition in these tissues; only studies reporting at least 5 datapoints across gestation were considered. A total of 12 scientific articles published between 1977 and 2020 were selected for model development, and the model was validated using 11 separate scientific papers. The model consists of three connected sub-models: a protein deposition (Pd) model, a weight gain model, and an EAA deposition model. Weight gain, Pd, and EAA deposition curves were developed with nonparametric statistics using spline regression. Validation showed strong agreement between observed and predicted growth (r2 = 0.92, root mean square error = 3%). The proposed model also offered descriptive insights into weight gain and Pd during gestation. It suggests that time-dependent Pd is more accurately described as an increase in fluid deposition during mid-gestation coinciding with a reduction in Pd. In addition, because CP composition differs between pregnancy-related tissues and the maternal body, Pd by itself may not be the best criterion for estimating the EAA requirement of pregnant sows. The proposed model also captures the negative maternal Pd that occurs in late gestation and indicates that litter size influences maternal tissue mobilization more than parity. The model predicts that the EAA requirements in early and mid-gestation are 75, 55, and 50% lower for primiparous sows than for parity 2, 3, and 4+ sows, respectively, which suggests potential benefits of parity-segregated feeding.
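The curve-fitting step can be illustrated with a stdlib-only stand-in. The datapoints below are hypothetical, and piecewise-linear interpolation replaces the paper's spline regression purely for brevity; the derivative of the fitted curve gives the daily deposition rate used by the sub-models.

```python
from bisect import bisect_right

# Hypothetical literature datapoints: (gestation day, fetal protein mass in g)
days = [30, 50, 70, 90, 110]
protein = [1.0, 8.0, 40.0, 150.0, 420.0]

def interp(day):
    """Piecewise-linear stand-in for the paper's spline regression."""
    i = min(max(bisect_right(days, day) - 1, 0), len(days) - 2)
    t = (day - days[i]) / (days[i + 1] - days[i])
    return protein[i] + t * (protein[i + 1] - protein[i])

def daily_deposition(day, h=1.0):
    """Central-difference estimate of the deposition rate (g/day)."""
    return (interp(day + h) - interp(day - h)) / (2.0 * h)
```

A smoothing spline would additionally damp noise in the literature data, which matters when the fitted curve is differentiated to obtain deposition rates.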


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8058
Author(s):  
Christian E. Galarza ◽  
Jonathan M. Palma ◽  
Cecilia F. Morais ◽  
Jaime Utria ◽  
Leonardo P. Carvalho ◽  
...  

This paper proposes a new theoretical stochastic model based on an abstraction of the opportunistic model for opportunistic networks. The model systematically computes the network parameters, such as the number of possible routes, the probability of successful transmission, the expected number of broadcast transmissions, and the expected number of receptions. The theoretical stochastic model usually explored in the literature is based on Markov chains; the main novelty of this paper is the employment of a percolation stochastic model, whose chief benefit is obtaining the network parameters directly. Additionally, the proposed approach can deal with probability values specified by bounded intervals or by a density function. The model is validated via Monte Carlo simulations, and a computational toolbox (an R package) is provided to ease reproduction of the results presented in the paper. The technique is illustrated through a numerical example in which the proposed model is applied to compute the energy consumption when transmitting a packet via an opportunistic network.
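The percolation view, and the Monte Carlo validation mentioned above, can be sketched on a toy network. The topology and link probabilities below are hypothetical: each link survives independently with its delivery probability, and the packet is delivered if a surviving path connects source to destination.

```python
import random

# Toy opportunistic network: directed links with delivery probabilities
links = {('s', 'a'): 0.9, ('s', 'b'): 0.9, ('a', 'd'): 0.9, ('b', 'd'): 0.9}

def delivered(rng):
    """One percolation realization: keep each link independently at its
    probability, then search for a surviving path from 's' to 'd'."""
    alive = [e for e, p in links.items() if rng.random() < p]
    frontier, seen = ['s'], {'s'}
    while frontier:
        u = frontier.pop()
        for (a, b) in alive:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return 'd' in seen

rng = random.Random(1)
trials = 20000
est = sum(delivered(rng) for _ in range(trials)) / trials

# Exact value for this topology: two independent two-hop routes
exact = 1.0 - (1.0 - 0.9 ** 2) ** 2
```

For this small graph the percolation model gives the success probability in closed form, and the Monte Carlo estimate converges to it, mirroring the validation strategy of the paper.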


Author(s):  
Iason Grigoratos ◽  
Ellen Rathje ◽  
Paolo Bazzurro ◽  
Alexandros Savvaidis

ABSTRACT In the past decade, several parts of the central United States, including Oklahoma, have experienced unprecedented seismicity rates, following an increase in the volumes of wastewater fluids that are being disposed underground. In this article, we present a semi-empirical model to hindcast the observed seismicity given the injection time history. Our proposed recurrence model is a modified version of the Gutenberg–Richter relation, building upon the seismogenic index model, which predicts a linear relationship between the number of induced events and the injected volume. Our methodology accounts for the effects of spatiotemporal pore-pressure diffusion, the stressing-rate dependency of the time lag between injection and seismicity rate changes, and the rapid cessation of seismicity upon unloading. We also introduce a novel multiscale regression, which enables us to produce grid-independent results of increased spatial resolution. Although the model is generic enough to be applicable in any region and has essentially only two free parameters for spatial calibration, it matches the earthquake time history of Oklahoma well across various scales, for both increasing and decreasing injection rates. In the companion paper (Grigoratos, Rathje, et al., 2020), we employ the model to distinguish the disposal-induced seismicity from the expected tectonic seismicity and test its forecasting potential.
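The seismogenic index model underlying the recurrence relation is simple to state: the expected number of events above magnitude m grows linearly with cumulative injected volume, N(M ≥ m) = V(t)·10^(Σ − b·m). A minimal sketch with hypothetical injection volumes and illustrative parameter values (the paper's calibrated model adds pore-pressure diffusion and time-lag effects on top of this baseline):

```python
# Hypothetical monthly injection volumes (m^3) and illustrative parameters
monthly_volume = [1.2e5, 1.5e5, 1.8e5, 1.6e5]
sigma_si, b = -4.5, 1.0   # seismogenic index and Gutenberg-Richter b-value

def expected_events(m_min):
    """Cumulative expected event count above m_min after each month,
    linear in cumulative injected volume V(t)."""
    V, counts = 0.0, []
    for v in monthly_volume:
        V += v
        counts.append(V * 10.0 ** (sigma_si - b * m_min))
    return counts

n3 = expected_events(3.0)
n4 = expected_events(4.0)
```

A built-in consistency check: with b = 1, raising the magnitude threshold by one unit must reduce the expected count by exactly a factor of ten.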


Author(s):  
Lee J. Wells ◽  
Byeng D. Youn ◽  
Zhimin Xi

This paper presents an innovative approach for quality engineering using the Eigenvector Dimension Reduction (EDR) method. Industry currently relies heavily on the Taguchi method and signal-to-noise (S/N) ratios as quality indices. However, the Taguchi method has several disadvantages: it relies on samples taken at specified levels, its results are valid only at the current design point, and maintaining a given level of confidence is expensive. Recently, it has been shown that the EDR method can accurately provide an analysis of variance, similar to that of the Taguchi method, without these drawbacks. This is possible because the EDR method is based on fundamental statistics, where the statistical information for each design parameter is used to estimate the uncertainty propagation through engineering systems. The EDR method therefore provides much more extensive capabilities than the Taguchi method, such as the ability to estimate not only the mean and standard deviation of the response but also its skewness and kurtosis. The uniqueness of the EDR method is its ability to generate the probability density function (PDF) of system performances. This capability, known as the probabilistic "what-if" study, provides a visual representation of the effects of the design parameters (e.g., their means and variances) upon the response. In addition, the probabilistic "what-if" study can be applied across multiple design parameters, allowing the analysis of interactions among control factors. Furthermore, it provides a basis for performing robust design optimization. Because of these advantages, the EDR method offers an alternative quality engineering platform to the Taguchi method. For easy execution by field engineers, the proposed platform, known as Quick Quality Quantification (Q3), will be developed as a Microsoft Excel add-in.
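The dimension-reduction idea behind EDR can be illustrated with its simpler univariate cousin: the n-dimensional response is decomposed into one-dimensional contributions along each input axis, and each 1-D expectation is evaluated by a small quadrature. The sketch below uses 3-point Gauss-Hermite nodes for independent normal inputs and a hypothetical test response; the eigenvector version additionally rotates the axes along eigenvector directions of the covariance.

```python
import math

def gh3_expect(f):
    """3-point Gauss-Hermite expectation of f(Z), Z ~ N(0, 1):
    nodes 0 and +/-sqrt(3) with weights 2/3 and 1/6."""
    s = math.sqrt(3.0)
    return f(-s) / 6.0 + 2.0 * f(0.0) / 3.0 + f(s) / 6.0

def udr_mean(g, mu, sigma):
    """Univariate dimension-reduction estimate of E[g(X)] for independent
    normal inputs: sum of axis-wise expectations minus (n-1)*g(mu)."""
    n = len(mu)
    total = -(n - 1) * g(mu)
    for i in range(n):
        def gi(z, i=i):
            x = list(mu)
            x[i] = mu[i] + sigma[i] * z
            return g(x)
        total += gh3_expect(gi)
    return total

# Hypothetical test response with a known exact mean:
# E[x0^2 + 3*x1] = mu0^2 + s0^2 + 3*mu1 = 1 + 0.25 + 6 = 7.25
g = lambda x: x[0] ** 2 + 3.0 * x[1]
est = udr_mean(g, [1.0, 2.0], [0.5, 0.3])
```

Only 2n + 1 response evaluations are needed for n inputs, which is the efficiency that makes the PDF generation and "what-if" studies affordable.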


2007 ◽  
Vol 34 (11) ◽  
pp. 1506-1517 ◽  
Author(s):  
Kyoung-Kyu Choi ◽  
Shelley L. Lissel ◽  
Mahmoud M. Reda Taha

In the present study, masonry creep was experimentally investigated. Creep tests were performed on masonry prisms produced using standard fired clay brick and standard Type S mortar. A total of 11 sets of loaded and unloaded masonry specimens were tested under sustained load with three main parameters: stress level, masonry age at loading, and relative humidity. The unloaded prisms compensated for the effects of shrinkage. In this article, a number of rheological models reported in the literature are examined for their ability to predict masonry creep. Moreover, a new rheological model that considers the effect of stress level and masonry age at loading is proposed. The system parameters of the proposed model were identified using the experimental data. The proposed model was then validated using masonry creep data reported by other researchers but not used in model development. It is shown that the creep behaviour of masonry can be modelled with good accuracy using the proposed rheological model.
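A classical rheological model of the kind examined in such comparisons is the Burgers model (a Maxwell and a Kelvin element in series). The sketch below is generic, with illustrative parameter values, not the paper's fitted model; it gives the creep strain under constant sustained stress.

```python
import math

def burgers_creep(t, stress, E1=10e3, eta1=5e6, E2=30e3, eta2=2e5):
    """Creep strain of a Burgers model under constant stress.

    J(t) = 1/E1 + t/eta1 + (1 - exp(-t/tau)) / E2, tau = eta2/E2:
    instantaneous elastic strain, viscous flow, and delayed (Kelvin)
    creep. Parameter values here are illustrative, not fitted data.
    """
    tau = eta2 / E2  # Kelvin retardation time
    J = 1.0 / E1 + t / eta1 + (1.0 - math.exp(-t / tau)) / E2
    return stress * J
```

Fitting such a model to the prism data would amount to identifying E1, eta1, E2, and eta2 (possibly as functions of stress level and age at loading, as the proposed model does).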


2019 ◽  
Vol 11 (5) ◽  
pp. 1285 ◽  
Author(s):  
Faramarz Khosravi ◽  
Gokhan Izbirak ◽  
Kehinde Adewale Adesina

As the global environment becomes more competitive, sustainability is increasingly becoming an important assessment tool. An exponential-distribution stochastic model is developed for assessing and measuring the sustainability of a healthcare system. The aim of this study is to provide a sustainability measuring model driven by the actual distribution of the sustainability indicators. In this paper, the notions of the "Triple Bottom Line" (TBL) are followed in deriving the sustainability challenge and capacity indicators for the environmental, social, and economic dimensions. Since basic challenges and capacities depend on the modes of the organization, the study proposes an exponentially distributed stochastic model for measuring sustainability. A numerical illustration of Iranian healthcare is presented to demonstrate the efficiency of the proposed model. In the results obtained, the sustainability indices for the environmental, economic, and social dimensions are 54.40%, 48.80%, and 66.80%, respectively. This indicates that the healthcare system achieved some sustainability through the social dimension; improving the environmental and economic aspects of the TBL is therefore necessary. The proposed model can be used as a panoramic tool for effective measurement of the sustainability level of any healthcare system.
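A challenge-versus-capacity index of this kind has a convenient closed form when both quantities are independent and exponentially distributed: P(capacity > challenge) depends only on the two means. The sketch below is a generic illustration with hypothetical means, not the paper's calibrated indicators.

```python
def sustainability_index(mean_challenge, mean_capacity):
    """P(capacity > challenge) for independent exponential variables.

    For X ~ Exp(1/m_chal) and Y ~ Exp(1/m_cap), integrating the joint
    density gives P(Y > X) = m_cap / (m_cap + m_chal).
    """
    return mean_capacity / (mean_capacity + mean_challenge)

# Hypothetical example: capacity on average twice the challenge
social = sustainability_index(mean_challenge=1.0, mean_capacity=2.0)
```

Equal mean challenge and capacity yield an index of exactly 0.5, the natural break-even point for interpreting the reported percentages.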

