An L-Moment-Based Analog for the Schmeiser-Deutsch Class of Distributions

2012, Vol 2012, pp. 1-16
Author(s): Todd C. Headrick, Mohan D. Pant

This paper characterizes the conventional moment-based Schmeiser-Deutsch (S-D) class of distributions through the method of L-moments. The system can be used in a variety of settings such as simulation or modeling various processes. A procedure is also described for simulating S-D distributions with specified L-moments and L-correlations. The Monte Carlo results presented in this study indicate that the estimates of L-skew, L-kurtosis, and L-correlation associated with the S-D class of distributions are substantially superior to their corresponding conventional product-moment estimators in terms of relative bias—most notably when sample sizes are small.
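The abstract turns on L-moment ratios (L-skew, L-kurtosis) as alternatives to conventional skew and kurtosis. As a minimal illustration, not the paper's S-D fitting procedure, the sketch below estimates the first four sample L-moments from unbiased probability-weighted moments; the function name and the exponential test case are illustrative.

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments and the L-moment ratios
    (L-skew, L-kurtosis), via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)

    # Unbiased estimators of the probability-weighted moments b0..b3.
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))

    # First four L-moments and the ratios tau3 = l3/l2, tau4 = l4/l2.
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2

# Example: an exponential sample, whose theoretical L-skew is 1/3
# and L-kurtosis is 1/6.
rng = np.random.default_rng(1)
print(sample_lmoments(rng.exponential(size=5000)))
```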

2012, Vol 2012, pp. 1-23
Author(s): Todd C. Headrick, Mohan D. Pant

This paper introduces a standard logistic L-moment-based system of distributions. The proposed system is an analog to the standard normal conventional moment-based Tukey g-h, g, h, and h-h system of distributions. The system likewise consists of four classes of distributions: (i) an asymmetric analog of the g-h class, (ii) a log-logistic analog of the g class, (iii) a symmetric analog of the h class, and (iv) an asymmetric analog of the h-h class. The system can be used in a variety of settings such as simulation or modeling events, most notably when heavy-tailed distributions are of interest. A procedure is also described for simulating distributions from each of the four classes with specified L-moments and L-correlations. The Monte Carlo results presented in this study indicate that estimates of L-skew, L-kurtosis, and L-correlation associated with these distributions are substantially superior to their corresponding conventional product-moment estimators in terms of relative bias and relative standard error.
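For readers unfamiliar with the conventional system the paper parallels, the sketch below generates data from the standard normal based Tukey g-h transformation, where g controls skew and h controls tail weight; the logistic L-moment-based analog applies a comparable transformation to standard logistic deviates. The parameter values are illustrative, and the exact parameterization used in the paper may differ.

```python
import numpy as np

def tukey_gh(z, g, h):
    """Conventional Tukey g-h transformation of a standard normal
    deviate z: g controls skew, h controls tail weight."""
    if g == 0.0:
        return z * np.exp(h * z**2 / 2.0)                     # symmetric h class
    return (np.expm1(g * z) / g) * np.exp(h * z**2 / 2.0)     # g-h class

rng = np.random.default_rng(7)
z = rng.standard_normal(100_000)

# A skewed, heavy-tailed sample (illustrative g and h values).
y = tukey_gh(z, g=0.3, h=0.1)
print(np.percentile(y, [1, 50, 99]))
```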


2021, Vol 3 (1), pp. 61-89
Author(s): Stefan Geiß

This study uses Monte Carlo simulation techniques to estimate the minimum required levels of intercoder reliability in content analysis data for testing correlational hypotheses, depending on sample size, effect size, and coder behavior under uncertainty. The resulting procedure is analogous to power calculations for experimental designs. In the most widespread sample size/effect size settings, the rule of thumb that chance-adjusted agreement should be ≥.800 or ≥.667 is consistent with the simulation results, yielding acceptable α and β error rates. However, the simulation approach allows precise power calculations that take the specifics of each study's context into account, moving beyond one-size-fits-all recommendations. Studies with low sample sizes and/or low expected effect sizes may need coder agreement above .800 to test a hypothesis with sufficient statistical power, whereas in studies with high sample sizes and/or high expected effect sizes, coder agreement below .667 may suffice. Such calculations can help both in evaluating and in designing studies. Particularly in pre-registered research, higher sample sizes may be used to compensate for low expected effect sizes and/or borderline coding reliability (e.g., when constructs are hard to measure). I supply equations, easy-to-use tables, and R functions to facilitate use of this framework, along with example code as an online appendix.
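The study supplies R functions; as a rough, language-neutral illustration of the simulation logic, the Python sketch below treats coding error as additive noise at a given reliability (a simplification of the coder-behavior model described in the abstract) and estimates power for a correlational hypothesis under assumed parameter values.

```python
import numpy as np
from scipy import stats

def power_with_unreliable_coding(n, true_r, reliability,
                                 alpha=0.05, reps=2000, seed=0):
    """Simulated power to detect a correlation of `true_r` with n cases
    when the coded variable carries measurement error such that its
    reliability equals `reliability` (coding error as additive noise)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = true_r * x + np.sqrt(1 - true_r**2) * rng.standard_normal(n)
        # Degrade y so that var(true) / var(observed) = reliability.
        noise_sd = np.sqrt((1 - reliability) / reliability)
        y_coded = y + noise_sd * rng.standard_normal(n)
        _, p = stats.pearsonr(x, y_coded)
        hits += p < alpha
    return hits / reps

# Illustrative scenario: n = 300, true r = .20, reliability = .80.
print(power_with_unreliable_coding(n=300, true_r=0.20, reliability=0.80))
```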


2021, Vol 19 (1), pp. 2-25
Author(s): Seongah Im

This study examined the performance of the beta-binomial model in comparison with generalized estimating equations (GEE) for clustered binary responses, which yield non-normal outcomes. Monte Carlo simulations were performed under varying intracluster correlations and sample sizes. The results showed that the beta-binomial model performed better for small samples, while GEE performed well with large samples.
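As an illustration of the data-generating process being compared, not the study's own code, the sketch below simulates clustered binary responses with a beta-binomial structure, where drawing cluster-level success probabilities from a Beta(a, b) induces an intracluster correlation of 1/(a + b + 1), and fits an intercept-only GEE with an exchangeable working correlation via statsmodels. All parameter values are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Beta-binomial data generation: each cluster draws its own success
# probability from Beta(a, b); ICC = 1 / (a + b + 1).
n_clusters, cluster_size = 30, 8
a, b = 2.0, 2.0                                   # ICC = 0.2
p_cluster = rng.beta(a, b, size=n_clusters)
y = rng.binomial(1, np.repeat(p_cluster, cluster_size))
groups = np.repeat(np.arange(n_clusters), cluster_size)
X = np.ones((y.size, 1))                          # intercept-only design

# GEE with an exchangeable working correlation (population-averaged model).
model = sm.GEE(y, X, groups=groups,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```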


2020, Vol 40 (3), pp. 314-326
Author(s): Anna Heath, Natalia Kunst, Christopher Jackson, Mark Strong, Fernando Alarid-Escudero, ...

Background. Investing efficiently in future research to improve policy decisions is an important goal. Expected value of sample information (EVSI) can be used to select the specific design and sample size of a proposed study by assessing the benefit of a range of different studies. Estimating EVSI with the standard nested Monte Carlo algorithm has a notoriously high computational burden, especially when using a complex decision model or when optimizing over study sample sizes and designs. Recently, several more efficient EVSI approximation methods have been developed. However, these approximation methods have not been compared, and therefore their comparative performance across different examples has not been explored. Methods. We compared 4 EVSI methods using 3 previously published health economic models. The examples were chosen to represent a range of real-world contexts, including situations with multiple study outcomes, missing data, and data from an observational rather than a randomized study. The computational speed and accuracy of each method were compared. Results. In each example, the approximation methods took minutes or hours to achieve reasonably accurate EVSI estimates, whereas the traditional Monte Carlo method took weeks. Specific methods are particularly suited to problems where we wish to compare multiple proposed sample sizes, when the proposed sample size is large, or when the health economic model is computationally expensive. Conclusions. As all the evaluated methods gave estimates similar to those given by traditional Monte Carlo, we suggest that EVSI can now be efficiently computed with confidence in realistic examples. No systematically superior EVSI computation method exists as the properties of the different methods depend on the underlying health economic model, data generation process, and user expertise.
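For context, the standard nested Monte Carlo EVSI algorithm referred to above can be sketched on a toy decision model: a two-strategy comparison whose incremental net benefit is linear in a single normally distributed parameter, so that the posterior after the proposed study is available in closed form. The model, prior, and scaling values below are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decision model: incremental net benefit INB(theta) = 10000 * theta,
# prior theta ~ Normal(0.05, 0.10^2), study observations ~ Normal(theta, 0.5^2).
prior_mu, prior_sd = 0.05, 0.10
sigma = 0.5
inb = lambda theta: 10_000 * theta

def evsi_nested(n_study, outer=2_000, inner=2_000):
    """Standard nested Monte Carlo EVSI for a proposed study of size
    n_study: the outer loop simulates study data from the prior
    predictive; the inner loop evaluates the preposterior expected
    net benefit from posterior draws."""
    current = max(0.0, inb(prior_mu))          # value of deciding now
    gains = np.empty(outer)
    for k in range(outer):
        theta = rng.normal(prior_mu, prior_sd)                # outer draw
        xbar = rng.normal(theta, sigma / np.sqrt(n_study))    # simulated data
        # Conjugate normal posterior given the simulated study.
        post_prec = 1 / prior_sd**2 + n_study / sigma**2
        post_mu = (prior_mu / prior_sd**2 + n_study * xbar / sigma**2) / post_prec
        post_sd = np.sqrt(1 / post_prec)
        # Inner loop; for this linear toy model the posterior mean alone
        # would suffice, which hints at why cheaper approximations can work.
        theta_post = rng.normal(post_mu, post_sd, size=inner)
        gains[k] = max(0.0, inb(theta_post).mean())
    return gains.mean() - current

print(evsi_nested(n_study=100))
```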


Methodology, 2009, Vol 5 (2), pp. 60-70
Author(s): W. Holmes Finch, Teresa Davenport

Permutation testing has been suggested as an alternative to the standard F approximate tests used in multivariate analysis of variance (MANOVA). These approximate tests, such as Wilks' Lambda and Pillai's Trace, have been shown to perform poorly when the assumptions of normally distributed dependent variables and homogeneity of group covariance matrices are violated. Because Monte Carlo permutation tests do not rely on distributional assumptions, they may be expected to work better than their approximate counterparts when the data do not conform to the assumptions described above. The current simulation study compared the performance of four standard MANOVA test statistics with their Monte Carlo permutation-based counterparts under a variety of small-sample conditions, both when the assumptions were met and when they were not. Results suggest that for sample sizes of 50 subjects, power is very low for all the statistics. In addition, Type I error rates for both the approximate F and Monte Carlo tests were inflated under the condition of nonnormal data and unequal covariance matrices. In general, the performance of the Monte Carlo permutation tests was slightly better in terms of Type I error rates and power when the assumptions of normality and homogeneous covariance matrices were both violated. It should be noted that these simulations were based on the three-group case only, and as such the results presented in this study can only be generalized to similar situations.
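A minimal sketch of the Monte Carlo permutation approach described above: compute Wilks' Lambda for the observed grouping, recompute it under many random permutations of the group labels, and take the p-value as the proportion of permuted statistics at least as extreme (here, as small) as the observed one. The three-group, two-outcome example data are hypothetical.

```python
import numpy as np

def wilks_lambda(X, labels):
    """Wilks' Lambda = det(W) / det(W + B), with W and B the within-
    and between-group SSCP matrices."""
    grand = X.mean(axis=0)
    W = np.zeros((X.shape[1], X.shape[1]))
    B = np.zeros_like(W)
    for g in np.unique(labels):
        Xg = X[labels == g]
        dev = Xg - Xg.mean(axis=0)
        W += dev.T @ dev
        d = (Xg.mean(axis=0) - grand)[:, None]
        B += Xg.shape[0] * (d @ d.T)
    return np.linalg.det(W) / np.linalg.det(W + B)

def permutation_manova(X, labels, n_perm=4999, seed=0):
    """Monte Carlo permutation p-value: proportion of label permutations
    giving a Wilks' Lambda as small as, or smaller than, the observed one."""
    rng = np.random.default_rng(seed)
    observed = wilks_lambda(X, labels)
    perm_stats = np.array([wilks_lambda(X, rng.permutation(labels))
                           for _ in range(n_perm)])
    return observed, (1 + np.sum(perm_stats <= observed)) / (n_perm + 1)

# Hypothetical small-sample example: 3 groups, 2 dependent variables.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 17)          # about 50 subjects in total
X = rng.standard_normal((labels.size, 2))
X[labels == 2] += 0.5                      # shift one group on both outcomes
print(permutation_manova(X, labels))
```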


1980, Vol 5 (4), pp. 309-335
Author(s): R. Clifford Blair, James J. Higgins

Computer-generated Monte Carlo techniques were used to compare the power of Wilcoxon's rank-sum test to the power of the t test for two independent means for situations in which samples were drawn from (1) uniform, (2) Laplace, (3) half-normal, (4) exponential, (5) mixed-normal, and (6) mixed-uniform distributions. Sample sizes studied were (n1, n2) = (3, 9), (6, 6), (9, 27), (18, 18), (27, 81), and (54, 54). It was concluded that (1) generally speaking, the Wilcoxon statistic held very large power advantages over the t statistic, (2) asymptotic relative efficiencies were reasonably good indicators of the relative power of the two statistics, (3) results obtained from smaller samples were often markedly different from the results obtained from larger samples, and (4) because of the narrow ranges of population shapes and sample sizes investigated in some widely cited previous studies of this type, the conclusions reached in those studies must now be deemed questionable.
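A condensed version of this kind of power comparison can be reproduced by simulation, as in the sketch below, which estimates the empirical power of the t test and the Wilcoxon rank-sum (Mann-Whitney) test for exponential populations differing by a location shift; the shift value is illustrative, and the sample-size pair is one of those listed above.

```python
import numpy as np
from scipy import stats

def empirical_power(n1, n2, shift, reps=5000, alpha=0.05, seed=0):
    """Empirical power of the two-sample t test and the Wilcoxon
    rank-sum (Mann-Whitney) test when both samples are exponential
    and the second is shifted by `shift`."""
    rng = np.random.default_rng(seed)
    t_hits = w_hits = 0
    for _ in range(reps):
        x = rng.exponential(size=n1)
        y = rng.exponential(size=n2) + shift
        t_hits += stats.ttest_ind(x, y).pvalue < alpha
        w_hits += stats.mannwhitneyu(x, y, alternative='two-sided').pvalue < alpha
    return t_hits / reps, w_hits / reps

# One of the study's sample-size pairs, with an illustrative shift.
print(empirical_power(n1=9, n2=27, shift=0.8))
```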


2018
Author(s): Andre Kretzschmar, Gilles Gignac

We conducted a Monte Carlo simulation within a latent variable framework by varying the following characteristics: population correlation (ρ = .10, .20, .30, .40, .50, .60, .70, .80, .90, and 1.00) and composite score reliability (coefficient omega: ω = .40, .50, .60, .70, .80, and .90). The sample sizes required to estimate stable measurement-error-free correlations were found to approach N = 490 for typical research scenarios (population correlation ρ = .20; composite score reliability ω = .70) and as high as N = 1,000+ for data associated with lower, but still sometimes observed, reliabilities (ω = .40 to .50). We encourage researchers to take reliability into consideration when evaluating the sample sizes required to produce stable measurement-error-free correlations.
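A sketch of the underlying logic, under assumptions chosen here purely for illustration: latent scores with correlation ρ are observed through composites whose reliability is ω, the observed correlation is corrected for attenuation by dividing by ω (i.e., sqrt(ω·ω) when both reliabilities are equal), and the sampling variability of that corrected estimate is tracked across sample sizes.

```python
import numpy as np

def corrected_corr_sd(n, rho, omega, reps=2000, seed=0):
    """Sampling sd of the attenuation-corrected correlation
    r_xy / sqrt(omega_x * omega_y) when the latent correlation is rho
    and both composites have reliability omega."""
    rng = np.random.default_rng(seed)
    err_sd = np.sqrt((1 - omega) / omega)   # error sd giving reliability omega
    est = np.empty(reps)
    for k in range(reps):
        t1 = rng.standard_normal(n)
        t2 = rho * t1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        x = t1 + err_sd * rng.standard_normal(n)   # observed composites
        y = t2 + err_sd * rng.standard_normal(n)
        est[k] = np.corrcoef(x, y)[0, 1] / omega   # sqrt(omega * omega) = omega
    return est.std()

# Stability improves slowly with N for a modest rho and reliability.
for n in (100, 250, 490, 1000):
    print(n, round(corrected_corr_sd(n, rho=0.20, omega=0.70), 3))
```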


2012, Vol 2012, pp. 1-19
Author(s): Todd C. Headrick, Mohan D. Pant

This paper introduces a new family of generalized lambda distributions (GLDs) based on a method of doubling symmetric GLDs. The development is carried out in the context of L-moment and L-correlation theory. As such, a procedure is developed for specifying double GLDs with controlled degrees of L-skew, L-kurtosis, and L-correlation. The procedure can be applied in a variety of settings such as modeling events and Monte Carlo or simulation studies. Further, it is demonstrated that estimates of L-skew, L-kurtosis, and L-correlation are substantially superior to conventional product-moment estimates of skew, kurtosis, and Pearson correlation in terms of both relative bias and efficiency when heavy-tailed distributions are of concern.
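For readers unfamiliar with the family, a GLD is defined through its quantile function, so simulation proceeds by inverse transform, as in the sketch below using the Ramberg-Schmeiser parameterization; the doubling construction and L-moment fitting developed in the paper are not reproduced here, and the parameter values shown are the classic GLD approximation to the standard normal.

```python
import numpy as np

def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Ramberg-Schmeiser GLD quantile function:
    Q(u) = lam1 + (u**lam3 - (1 - u)**lam4) / lam2."""
    return lam1 + (u**lam3 - (1.0 - u)**lam4) / lam2

# Inverse-transform sampling: push uniform draws through Q(u).
rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)
# Classic RS-GLD approximation to the standard normal.
x = gld_quantile(u, lam1=0.0, lam2=0.1975, lam3=0.1349, lam4=0.1349)
print(x.mean(), x.std())
```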

