Uniformly best constant risk and minimax point estimates

1952 ◽  
Vol 48 (1) ◽  
pp. 49
Author(s):  
R.P. Peterson
2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes regardless of the field of study.
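As context for the abstract above, a 95% prediction interval in a random-effects meta-analysis is commonly computed from the pooled mean, its standard error, and the estimated between-study variance τ². A minimal sketch (illustrative only, not code from the paper's package; it uses a normal quantile where a t quantile with k − 2 degrees of freedom is often preferred):

```python
import math
from statistics import NormalDist

def prediction_interval(mu, se, tau2, level=0.95):
    """Approximate prediction interval for a random-effects meta-analysis.

    mu:   pooled mean effect size
    se:   standard error of the pooled mean
    tau2: estimated between-study variance (tau-squared)
    """
    # Normal quantile as an approximation to the usual t quantile
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    half = z * math.sqrt(se ** 2 + tau2)
    return mu - half, mu + half
```

Because the PI adds τ² under the square root, it is always at least as wide as the CI for the mean, which is exactly why it conveys heterogeneity.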


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Bing He ◽  
Yong Hong ◽  
Zhen Li

Abstract For the Hilbert-type multiple integral inequality $$ \int_{\mathbb{R}_{+}^{n}} \int_{\mathbb{R}_{+}^{m}} K\bigl( \Vert x \Vert_{m,\rho}, \Vert y \Vert_{n,\rho}\bigr) f(x) g(y) \,\mathrm{d}x \,\mathrm{d}y \leq M \Vert f \Vert_{p,\alpha} \Vert g \Vert_{q,\beta} $$ with a nonhomogeneous kernel $K(\|x\|_{m,\rho}, \|y\|_{n,\rho}) = G(\|x\|^{\lambda_{1}}_{m,\rho} / \|y\|^{\lambda_{2}}_{n,\rho})$ ($\lambda_{1}\lambda_{2} > 0$), in this paper, by using the weight function method, necessary and sufficient conditions that the parameters p, q, $\lambda_{1}$, $\lambda_{2}$, α, β, m, and n should satisfy for the inequality to hold for some constant M are established, and the expression formula for the best constant factor is also obtained. Finally, applications to operator boundedness and operator norms are considered, and the norms of several integral operators are discussed.


2021 ◽  
Vol 9 (1) ◽  
pp. 11
Author(s):  
Alex Garivaltis

This note provides a neat and enjoyable expansion and application of the magnificent Ordentlich-Cover theory of “universal portfolios”. I generalize Cover’s benchmark of the best constant-rebalanced portfolio (or 1-linear trading strategy) in hindsight by considering the best bilinear trading strategy determined in hindsight for the realized sequence of asset prices. A bilinear trading strategy is a mini two-period active strategy whose final capital growth factor is linear separately in each period’s gross return vector for the asset market. I apply Thomas Cover’s ingenious performance-weighted averaging technique to construct a universal bilinear portfolio that is guaranteed (uniformly for all possible market behavior) to compound its money at the same asymptotic rate as the best bilinear trading strategy in hindsight. Thus, the universal bilinear portfolio asymptotically dominates the original (1-linear) universal portfolio in the same technical sense that Cover’s universal portfolios asymptotically dominate all constant-rebalanced portfolios and all buy-and-hold strategies. In fact, like so many Russian dolls, one can get carried away and use these ideas to construct an endless hierarchy of ever more dominant H-linear universal portfolios.
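The performance-weighted averaging the abstract refers to can be sketched in its simplest setting: two assets and constant-rebalanced portfolios (CRPs) averaged under a uniform prior, where the universal portfolio's final wealth equals the average of the individual CRP wealths. This is a toy illustration of Cover's original construction, not Garivaltis's bilinear extension:

```python
def crp_wealth(b, price_relatives):
    # Final wealth of a constant-rebalanced portfolio that puts weight b
    # on asset 1 and (1 - b) on asset 2, rebalancing every period.
    wealth = 1.0
    for x1, x2 in price_relatives:
        wealth *= b * x1 + (1.0 - b) * x2
    return wealth

def universal_wealth(price_relatives, grid=1000):
    # Performance-weighted average over a grid of CRPs with a uniform
    # prior: the universal portfolio's final wealth is the average of
    # the CRP wealths over the simplex (here, the interval [0, 1]).
    bs = [i / grid for i in range(grid + 1)]
    return sum(crp_wealth(b, price_relatives) for b in bs) / (grid + 1)
```

On the classic "market that alternates doubling and halving" example, the best CRP (b = 1/2) compounds wealth while every buy-and-hold strategy stays flat, and the universal portfolio captures a constant fraction of the best CRP's wealth per Cover's guarantee.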


Author(s):  
Clemens M. Lechner ◽  
Nivedita Bhaktha ◽  
Katharina Groskurth ◽  
Matthias Bluemke

Abstract Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques, and about how best to incorporate the resulting skill measures in secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent’s ability (i.e., all types of “test scores”) are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error (especially PV methodology) stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
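Plausible-values methodology, which the paper singles out as the method of choice, typically means running the secondary analysis once per set of plausible values and pooling the results with Rubin's combining rules, as in multiple imputation. A minimal sketch under those standard rules (function and variable names are illustrative):

```python
import statistics

def pool_plausible_values(estimates, variances):
    """Pool analyses run separately on M sets of plausible values
    using Rubin's rules.

    estimates: per-PV-set point estimates of the quantity of interest
    variances: per-PV-set squared standard errors
    Returns the pooled estimate and its pooled standard error.
    """
    m = len(estimates)
    qbar = statistics.fmean(estimates)      # pooled point estimate
    ubar = statistics.fmean(variances)      # within-imputation variance
    b = statistics.variance(estimates)      # between-imputation variance
    total_var = ubar + (1 + 1 / m) * b      # Rubin's total variance
    return qbar, total_var ** 0.5
```

The pooled standard error exceeds the naive within-analysis average, which is how the PV approach propagates measurement uncertainty into secondary analyses.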


Author(s):  
Rupert L. Frank ◽  
David Gontier ◽  
Mathieu Lewin

Abstract In this paper we disprove part of a conjecture of Lieb and Thirring concerning the best constant in their eponymous inequality. We prove that the best Lieb–Thirring constant, when the eigenvalues of a Schrödinger operator $-\Delta + V(x)$ are raised to the power $\kappa$, is never given by the one-bound-state case when $\kappa > \max(0, 2 - d/2)$ in space dimension $d \ge 1$. When in addition $\kappa \ge 1$, we prove that this best constant is never attained for a potential having finitely many eigenvalues. The method to obtain the first result is to carefully compute the exponentially small interaction between two Gagliardo–Nirenberg optimisers placed far apart. For the second result, we study the dual version of the Lieb–Thirring inequality, in the same spirit as in Part I of this work, Gontier et al. (The nonlinear Schrödinger equation for orthonormal functions I. Existence of ground states. Arch. Rat. Mech. Anal., 2021. https://doi.org/10.1007/s00205-021-01634-7). In a different but related direction, we also show that the cubic nonlinear Schrödinger equation admits no orthonormal ground state in 1D for more than one function.
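For readers' reference, the inequality in question bounds Riesz means of the negative eigenvalues of the Schrödinger operator; in its standard form it reads

```latex
\sum_{k} \bigl|\lambda_k(-\Delta + V)\bigr|^{\kappa}
\;\le\; L_{\kappa,d} \int_{\mathbb{R}^d} V_-(x)^{\,\kappa + d/2}\,\mathrm{d}x ,
```

where $V_-$ denotes the negative part of the potential and $L_{\kappa,d}$ is the best constant whose attainment the paper studies.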


Author(s):  
Kuen-Suan Chen ◽  
Tsang-Chuan Chang ◽  
Yun-Tsan Lin

In the face of fierce global competition, firms are outsourcing important but nonessential tasks to external professional companies. Corporations are also turning from competitive business models to cooperative strategic partnerships in hopes of swiftly responding to consumer needs and enhancing overall efficiency and industry competitiveness. This research developed an outsourcing partner selection model in hopes of helping firms select better outsourcing partners for long-term collaborations. Process quality and manufacturing time are vital when evaluating outsourcing partners. We therefore used the process capability index [Formula: see text] and the manufacturing time performance index [Formula: see text] in the proposed model. Sample data from random samples are needed to calculate the point estimates of these indices; however, it is impossible to obtain a sample with a structure completely identical to that of the population, which means that sampling generates unavoidable sampling errors. The reliability of point estimates is also uncertain, which inevitably leads to misjudgment in some cases. Thus, to reduce estimation errors and increase assessment reliability, we calculated the [Formula: see text]% confidence intervals of the indices [Formula: see text] and [Formula: see text], then constructed the joint confidence region of [Formula: see text] and [Formula: see text] to develop an outsourcing partner selection model that will help firms select better outsourcing partners for long-term collaborations. We also provide a case as an illustration of how the proposed selection model is implemented.
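The shift from point estimates to interval estimates that the abstract describes can be illustrated with a generic process capability index. The paper's specific indices are elided above, so the sketch below uses the common Cpk with a percentile bootstrap interval; the index choice, specification limits, and resampling scheme are all illustrative assumptions, not the paper's method:

```python
import random
import statistics

def cpk(sample, lsl, usl):
    # Point estimate of the generic capability index Cpk
    # (illustrative; not one of the paper's specific indices).
    mu = statistics.fmean(sample)
    s = statistics.stdev(sample)
    return min(usl - mu, mu - lsl) / (3 * s)

def bootstrap_ci(sample, lsl, usl, level=0.95, reps=2000, seed=0):
    # Percentile bootstrap interval: an interval estimate that
    # reflects sampling error, rather than a single point estimate.
    rng = random.Random(seed)
    stats = sorted(
        cpk([rng.choice(sample) for _ in sample], lsl, usl)
        for _ in range(reps)
    )
    lo = stats[int(reps * (1 - level) / 2)]
    hi = stats[int(reps * (1 + level) / 2) - 1]
    return lo, hi
```

Two suppliers with identical point estimates can have very different interval widths, which is precisely the misjudgment risk the joint-confidence-region approach addresses.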

