A Minimax Approach to Mapping Partial Interval Uncertainties into Point Estimates

Author(s):  
Vadim Romanuke
2019
Author(s):  
Shinichi Nakagawa,
Malgorzata Lagisz,
Rose E. O'Dea,
Joanna Rutkowska,
Yefeng Yang,
...

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI, therefore, provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data with forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes regardless of the field of study.
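The prediction interval the abstract describes can be computed directly from the outputs of a random-effects meta-analysis. The sketch below is a minimal stdlib-only illustration (not the orchard package itself): it assumes you already have the pooled mean, its standard error, and the between-study variance tau², and it uses a normal quantile for simplicity where the usual Higgins-style PI uses a t-quantile with k − 2 degrees of freedom.

```python
import math
from statistics import NormalDist

def prediction_interval(mu, se_mu, tau2, level=0.95):
    # Normal-approximation prediction interval for the effect size of a
    # new study: mu +/- z * sqrt(tau^2 + SE(mu)^2).
    # (Standard PIs use a t-quantile with k-2 df; z keeps this stdlib-only.)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * math.sqrt(tau2 + se_mu ** 2)
    return mu - half, mu + half

# Hypothetical values: pooled mean 0.30, SE 0.05, tau^2 = 0.04.
lo, hi = prediction_interval(mu=0.30, se_mu=0.05, tau2=0.04)
```

Note how the PI is much wider than the corresponding CI (0.30 ± 1.96 × 0.05), precisely because it folds in the between-study heterogeneity tau²; that width is what makes the PI an intuitive display of heterogeneity.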


Author(s):  
Clemens M. Lechner ◽  
Nivedita Bhaktha ◽  
Katharina Groskurth ◽  
Matthias Bluemke

Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques and about how to best incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods are optimal under all criteria, methods that result in a single point estimate of each respondent’s ability (i.e., all types of “test scores”) are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error—especially PV methodology—stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
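In practice, the PV methodology the abstract recommends means running the secondary analysis once per set of plausible values and then pooling the results with Rubin's combination rules. The sketch below is a generic illustration of that pooling step (the function name and example numbers are hypothetical, not from the paper):

```python
from statistics import mean, variance

def pool_plausible_values(estimates, variances):
    """Pool M analyses run on M sets of plausible values (Rubin's rules).

    estimates : per-PV point estimates of the quantity of interest
    variances : per-PV sampling variances (squared standard errors)
    Returns (pooled estimate, total variance of the pooled estimate).
    """
    m = len(estimates)
    q_bar = mean(estimates)          # pooled point estimate
    u_bar = mean(variances)          # within-imputation variance
    b = variance(estimates)          # between-imputation variance (n-1 denom.)
    total = u_bar + (1 + 1 / m) * b  # total variance
    return q_bar, total

# Hypothetical: a regression coefficient estimated on 5 sets of PVs.
est, tot = pool_plausible_values([0.52, 0.48, 0.55, 0.50, 0.47], [0.01] * 5)
```

The between-PV variance term is what carries the measurement uncertainty that a single test score would hide, which is why pooled standard errors from PVs are honest where point-estimate analyses are overconfident.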


Author(s):  
Kuen-Suan Chen ◽  
Tsang-Chuan Chang ◽  
Yun-Tsan Lin

In the face of fierce global competition, firms are outsourcing important but nonessential tasks to external professional companies. Corporations are also turning from competitive business models to cooperative strategic partnerships in hopes of swiftly responding to consumer needs and enhancing overall efficiency and industry competitiveness. This research developed an outsourcing partner selection model in hopes of helping firms select better outsourcing partners for long-term collaborations. Process quality and manufacturing time are vital when evaluating outsourcing partners. We therefore used the process capability index [Formula: see text] and the manufacturing time performance index [Formula: see text] in the proposed model. Sample data are needed to calculate point estimates of these indices; however, it is impossible to obtain a sample with a structure completely identical to that of the population, which means that sampling generates unavoidable sampling errors. The reliability of point estimates is also uncertain, which inevitably leads to misjudgment in some cases. Thus, to reduce estimation errors and increase assessment reliability, we calculated the [Formula: see text]% confidence intervals of the indices [Formula: see text] and [Formula: see text], then constructed the joint confidence region of [Formula: see text] and [Formula: see text] to develop an outsourcing partner selection model that will help firms select better outsourcing partners for long-term collaborations. We also provide a case as an illustration of how the proposed selection model is implemented.
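The paper's specific indices are elided above ("[Formula: see text]"), but the move from a point estimate to a confidence interval can be illustrated with the common process capability index Cpk as a stand-in. The sketch below uses Bissell's (1990) well-known approximation for a Cpk confidence interval; it is a generic illustration under that assumption, not the interval construction used in the paper.

```python
import math
from statistics import NormalDist

def cpk_confidence_interval(cpk_hat, n, level=0.95):
    # Approximate CI for Cpk from a sample estimate (Bissell 1990):
    #   Cpk_hat +/- z * sqrt( 1/(9n) + Cpk_hat^2 / (2(n-1)) )
    z = NormalDist().inv_cdf(0.5 + level / 2)
    se = math.sqrt(1 / (9 * n) + cpk_hat ** 2 / (2 * (n - 1)))
    return cpk_hat - z * se, cpk_hat + z * se

# Hypothetical: an estimated Cpk of 1.33 from a sample of n = 50 parts.
lo, hi = cpk_confidence_interval(cpk_hat=1.33, n=50)
```

Even with 50 observations the interval is wide, which is exactly the abstract's point: judging a supplier by the point estimate alone (here 1.33) ignores sampling error that a confidence interval, or a joint confidence region over two indices, makes explicit.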


1991, Vol 23 (1), pp. 1-23
Author(s):  
Donald A. Berry ◽  
Robert P. Kertz

For k-armed Bernoulli bandits with discounting, sharp comparisons are given between average optimal rewards for a gambler and for a ‘perfectly informed’ gambler, over natural collections of prior distributions. Some of these comparisons are proved under general discounting, and others under non-increasing discount sequences. Connections are made between these comparisons and the concept of ‘regret’ in the minimax approach to bandit processes. Identification of extremal cases in the sharp comparisons is emphasized.
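The notion of ‘regret’ relative to a perfectly informed gambler can be made concrete with a toy simulation. The sketch below plays a two-armed undiscounted Bernoulli bandit with an epsilon-greedy gambler and measures the reward shortfall against an informed gambler who always plays the better arm. It is only an illustration of the regret concept; it does not reproduce the paper's sharp comparisons or discounted setting, and all parameters are hypothetical.

```python
import random

def regret_vs_informed(p, horizon=1000, eps=0.1, trials=200, seed=0):
    """Monte-Carlo regret of an epsilon-greedy gambler on a two-armed
    Bernoulli bandit, versus a 'perfectly informed' gambler who always
    plays the arm with the larger success probability."""
    rng = random.Random(seed)
    best = max(p)
    total = 0.0
    for _ in range(trials):
        counts, sums, reward = [0, 0], [0.0, 0.0], 0.0
        for t in range(horizon):
            if t < 2:                        # play each arm once to start
                arm = t
            elif rng.random() < eps:         # explore: random arm
                arm = rng.randrange(2)
            else:                            # exploit: best empirical mean
                arm = 0 if sums[0] / counts[0] >= sums[1] / counts[1] else 1
            r = 1.0 if rng.random() < p[arm] else 0.0
            counts[arm] += 1
            sums[arm] += r
            reward += r
        total += best * horizon - reward     # shortfall vs informed play
    return total / trials

avg_regret = regret_vs_informed(p=(0.5, 0.6))
```

Because the informed gambler knows the arm probabilities in advance, the learner's regret is strictly positive on average; minimax analyses of the kind the paper connects to ask how small this gap can be made against the worst-case prior.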

