Efficient approach for Monte Carlo simulation experiments and its applications to circuit systems design

Author(s):  
Chun-Hung Chen ◽  
K. Donohue ◽  
Jianwu Lin ◽  
E. Yucesan
1989 ◽  
Vol 26 (2) ◽  
pp. 214-221 ◽  
Author(s):  
Subhash Sharma ◽  
Srinivas Durvasula ◽  
William R. Dillon

The authors report some results on the behavior of alternative covariance structure estimation procedures in the presence of non-normal data. They conducted Monte Carlo simulation experiments with a factorial design involving three levels of skewness, three levels of kurtosis, and three different sample sizes. For normal data, among all the elliptical estimation techniques, elliptical reweighted least squares (ERLS) was equivalent in performance to ML. However, as expected, for non-normal data parameter estimates were unbiased for ML and the elliptical estimation techniques, whereas the bias in standard errors was substantial for GLS and ML. Among elliptical estimation techniques, ERLS was superior in performance. On the basis of the simulation results, the authors recommend that researchers use ERLS for both normal and non-normal data.


2014 ◽  
Vol 9 (4) ◽  
pp. 505-519 ◽  
Author(s):  
Dilip Kumar

Purpose – The purpose of this paper is to test the efficient market hypothesis for major Indian sectoral indices by means of a long memory approach in both the time domain and the frequency domain. This paper also tests the accuracy of the detrended fluctuation analysis (DFA) approach and the local Whittle (LW) approach by means of Monte Carlo simulation experiments. Design/methodology/approach – The author applies the DFA approach for the computation of the scaling exponent in the time domain. The robustness of the results is tested by the computation of the scaling exponent in the frequency domain by means of the LW estimator. The author applies a moving sub-sample approach on DFA to study the evolution of market efficiency in Indian sectoral indices. Findings – The Monte Carlo simulation experiments indicate that the DFA approach and the LW approach provide good estimates of the scaling exponent as the sample size increases. The author also finds that the efficiency characteristics of Indian sectoral indices and their stages of development are dynamic in nature. Originality/value – This paper has both methodological and empirical originality. On the methodological side, the author tests the small sample properties of the DFA and the LW approaches by using simulated series of fractional Gaussian noise and finds that both approaches possess superior properties in terms of capturing the scaling behavior of asset prices. On the empirical side, the author studies the evolution of long-range dependence characteristics in Indian sectoral indices.
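The DFA computation described above can be sketched compactly. The following is a minimal illustration, not the paper's implementation: it assumes linear detrending within log-spaced windows, and the estimated exponent should be near 0.5 for an uncorrelated series (the efficient-market benchmark).

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Minimal detrended fluctuation analysis (DFA-1) sketch.

    Integrate the mean-centred series, split the profile into
    non-overlapping windows at each scale, remove a linear trend per
    window, and regress log RMS fluctuation on log scale. The slope
    is the scaling exponent.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())           # integrated profile
    n = len(profile)
    if scales is None:
        scales = np.unique(
            np.floor(np.logspace(1, np.log10(n // 4), 20)).astype(int))
    flucts = []
    for s in scales:
        n_win = n // s
        segs = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                         # linear detrend per window
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# White noise has no long memory, so the exponent should be close to 0.5.
rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(4000))
```

The same function applied to a moving sub-sample, as in the paper's design, would trace how the exponent (and hence efficiency) evolves over time.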


2015 ◽  
Vol 2 (1) ◽  
pp. 97
Author(s):  
Robert Anderson ◽  
Zhou Wei ◽  
Ian Cox ◽  
Malcolm Moore ◽  
Florence Kussener

Design of Experiments (DoE) is widely used in design, manufacturing and quality management. The resulting data is usually analysed with multiple linear regression to generate polynomial equations that describe the relationship between process inputs and outputs. These equations enable us to understand how input values affect the predicted value of one or more outputs and find good set points for the inputs. However, to develop robust manufacturing processes, we also need to understand how variation in these inputs appears as variation in the output. This understanding allows us to define set points and control tolerances for the inputs that will keep the outputs within their required specification windows. Tolerance analysis provides a powerful way of finding input settings and ranges that minimise output variation to produce a process that is robust. In many practical applications, tolerance analysis exploits Monte Carlo simulation of the polynomial model generated from the DoE. This paper briefly describes tolerance analysis and then shows how Monte Carlo simulation experiments using space-filling designs can be used to find the input settings that result in a robust process. Using this approach, engineers can quickly and easily identify the key inputs responsible for transferring undesired variation to their process outputs and identify the set points and ranges that make their process as robust as possible. If the process is not sufficiently robust, they can rationally investigate different strategies to improve it. A case study approach is used to aid explanation and understanding.
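The Monte Carlo step can be illustrated on a hypothetical quadratic model standing in for a DoE-fitted polynomial (the coefficients below are purely illustrative, not from the paper's case study): sample the inputs around candidate set points with their tolerances and compare the transmitted output variation.

```python
import numpy as np

# Hypothetical response surface fitted from a DoE (illustrative coefficients).
def predicted_output(x1, x2):
    return 50 + 3.0 * x1 - 2.0 * x2 + 0.8 * x1 * x2 - 1.5 * x1 ** 2

rng = np.random.default_rng(42)
N = 100_000

def output_sd(set1, set2, tol_sd=0.1):
    """Monte Carlo transmitted variation: jitter the inputs around
    their set points (normal, sd = tolerance) and measure the spread
    of the predicted output."""
    x1 = rng.normal(set1, tol_sd, N)
    x2 = rng.normal(set2, tol_sd, N)
    return predicted_output(x1, x2).std()

# Flat regions of the response surface transmit less input variation:
# at (1, 0) the model's x1 gradient is zero, so variation shrinks.
sd_a = output_sd(0.0, 0.0)
sd_b = output_sd(1.0, 0.0)
```

In practice the candidate set points would come from a space-filling design over the input region, with the same simulation run at each design point to map out where the process is most robust.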


2021 ◽  
Vol 4 (4) ◽  
pp. 155-165
Author(s):  
Aminu Suleiman Mohammed ◽  
Badamasi Abba ◽  
Abubakar G. Musa

For proper actualization of the phenomena contained in some lifetime data sets, a generalization, extension or modification of classical distributions is required. In this paper, we introduce a new generalization of the exponential distribution, called the generalized odd generalized exponential-exponential distribution. The proposed distribution can model lifetime data with different failure rates, including increasing, decreasing, unimodal, bathtub, and decreasing-increasing-decreasing failure rates. Various properties of the model, such as the quantile function, moments, mean deviations, Rényi entropy, and order statistics, are derived. We provide an approximation for the values of the mean, variance, skewness, kurtosis, and mean deviations using Monte Carlo simulation experiments. Estimation of the distribution parameters is performed using the maximum likelihood method, and Monte Carlo simulation experiments are used to assess the estimation method. The method of maximum likelihood is shown to provide promising parameter estimates, and hence can be adopted in practice for estimating the parameters of the distribution. An application to real and simulated datasets indicates that the new model provides a better fit than the other compared distributions.
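The Monte Carlo approximation of moments works by pushing uniform draws through the distribution's quantile function and computing sample statistics. The sketch below uses the generalized (exponentiated) exponential distribution as a stand-in, since its quantile function is well known; the paper's GOGEE quantile would be substituted in the same way.

```python
import numpy as np

def q_gen_exponential(u, alpha, lam):
    """Quantile of the generalized (exponentiated) exponential
    distribution, F(x) = (1 - exp(-lam * x))**alpha, used here as a
    stand-in for the paper's quantile function."""
    return -np.log(1.0 - u ** (1.0 / alpha)) / lam

rng = np.random.default_rng(7)
u = rng.random(500_000)
# alpha = 1 reduces to the ordinary Exp(lam) distribution,
# whose exact moments let us check the simulation.
x = q_gen_exponential(u, alpha=1.0, lam=2.0)

mean = x.mean()
var = x.var()
skew = np.mean(((x - mean) / x.std()) ** 3)
# Exact values for Exp(2): mean = 0.5, variance = 0.25, skewness = 2.
```

Running the same recipe at several parameter settings tabulates how the mean, variance, skewness, and kurtosis respond to the shape parameters, which is how such simulation tables are typically produced.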


Author(s):  
Ulrik D. Nielsen ◽  
Jørgen J. Jensen

The paper elaborates on the probabilistic assessment of a simplified model for the rolling of a ship in a stochastic seaway. The model can be easily integrated with a probabilistic tool which enables evaluations of numerical simulations by the first order reliability method (FORM) and by Monte Carlo simulation (MCS). Results are presented for synchronous roll as well as parametric roll, where e.g. mean outcrossing rates have been calculated. FORM offers an efficient approach for the computations, although the approach should be applied with care in cases of parametric roll. The paper also touches on issues such as ergodicity and transient versus stationary stages in the roll realisations.


2020 ◽  
pp. 845-853 ◽  
Author(s):  
Bsma Abdul Hameed ◽  
Abbas N. Salman ◽  
Bayda Atiya Kalaf

This paper deals with the estimation of the stress-strength reliability for a component whose strength is independent of the opposing lower and upper bound stresses, when the stresses and strength follow the Inverse Kumaraswamy distribution. Three estimation approaches were applied, namely the maximum likelihood, moment, and shrinkage methods. Monte Carlo simulation experiments were performed to compare the estimation methods based on the mean squared error criterion.
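The quantity being estimated is R = P(T_lower < X < T_upper), and a Monte Carlo check of any estimator starts by simulating it directly. A minimal sketch, assuming the common Inverse Kumaraswamy parameterization F(x) = (1 - (1 + x)^(-alpha))^beta for x > 0 (the parameter values below are illustrative):

```python
import numpy as np

def r_inv_kumaraswamy(alpha, beta, size, rng):
    """Inverse-CDF sampling from the Inverse Kumaraswamy distribution,
    F(x) = (1 - (1 + x)**(-alpha))**beta, x > 0."""
    u = rng.random(size)
    return (1.0 - u ** (1.0 / beta)) ** (-1.0 / alpha) - 1.0

def stress_strength_mc(a_x, b_x, a_lo, b_lo, a_hi, b_hi,
                       n=200_000, seed=1):
    """Monte Carlo estimate of R = P(T_lo < X < T_hi) with strength X
    and independent lower/upper stresses, all Inverse Kumaraswamy."""
    rng = np.random.default_rng(seed)
    x = r_inv_kumaraswamy(a_x, b_x, n, rng)
    t_lo = r_inv_kumaraswamy(a_lo, b_lo, n, rng)
    t_hi = r_inv_kumaraswamy(a_hi, b_hi, n, rng)
    return np.mean((t_lo < x) & (x < t_hi))

# Illustrative parameters: strength stochastically between the stresses.
r_hat = stress_strength_mc(2.0, 3.0, 2.0, 1.0, 2.0, 6.0)
```

In a simulation study like the paper's, this true R (computed once per parameter setting) is the target against which the maximum likelihood, moment, and shrinkage estimators are compared via mean squared error.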


2019 ◽  
Vol 51 (2) ◽  
pp. 339-357 ◽  
Author(s):  
Weiyong Ding ◽  
Rui Fang ◽  
Peng Zhao

In this paper we treat a two-stage grouping procedure of building a k-out-of-n system from several clusters of components. We use a static framework in which the component reliabilities are fixed. Under such a framework, we address the impact of the selecting strategies, the sampling probabilities, and the component reliabilities on the constructed system’s reliability. An interesting finding is that the level of component reliabilities could be identified as a decisive factor in determining how the selecting strategies and the component reliabilities affect the system reliability. The new results generalize and extend those established earlier in the literature such as Di Crescenzo and Pellerey (2011), Hazra and Nanda (2014), Navarro, Pellerey, and Di Crescenzo (2015), and Hazra, Finkelstein, and Cha (2017). Several Monte Carlo simulation experiments are provided to illustrate the theoretical results.
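The kind of Monte Carlo experiment used to illustrate such results can be sketched generically: simulate independent component states from their fixed reliabilities and count the trials in which at least k components work. This is a generic k-out-of-n illustration, not the paper's two-stage grouping setup.

```python
import numpy as np

def koon_reliability_mc(p, k, n_trials=200_000, seed=0):
    """Monte Carlo estimate of a k-out-of-n system's reliability.

    p is the vector of fixed component reliabilities; the system works
    when at least k of its n components work.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    works = rng.random((n_trials, len(p))) < p   # component up/down states
    return np.mean(works.sum(axis=1) >= k)

# 2-out-of-3 system with identical components, p = 0.9:
# exact reliability = 3 * 0.9**2 * 0.1 + 0.9**3 = 0.972.
est = koon_reliability_mc([0.9, 0.9, 0.9], k=2)
```

Replacing the fixed vector `p` with components drawn from different clusters under a given selecting strategy turns this into the comparison experiment the paper describes.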


1993 ◽  
Vol 316 ◽  
Author(s):  
Shyh-Horng Yang ◽  
David Lim ◽  
Steven J. Morris ◽  
Al F. Tasch

In this paper, a new approach with greatly improved efficiency is reported for the Monte Carlo simulation of deeply-channeled implanted profiles in single-crystal silicon. This approach has been successfully implemented in the UT Monte Carlo code (UT-MARLOWE). Time savings of up to 212X have been observed with a 4-stage simulation. A simulation of arsenic implants with 15 keV implant energy typically takes about 12 minutes on a workstation.

