Inequality Indices as Tests of Fairness

2019 ◽  
Vol 129 (621) ◽  
pp. 2216-2239 ◽  
Author(s):  
Ravi Kanbur ◽  
Andy Snell

Abstract Inequality indices are traditionally interpreted as measures of deviation from equality. This article interprets them instead as statistical tests for a null of fairness within well-defined income generating processes. We find that the likelihood ratio (LR) tests for fairness versus unfairness within two such processes are proportional to Theil's first and second inequality indices, respectively. The LR values may be used either as test statistics or to approximate a Bayes factor that measures the posterior odds of the fair version of each process over the unfair version. We also apply this perspective to the measurement of inequality of opportunity.
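The two indices named in the abstract are standard; as an illustrative sketch from their textbook definitions (this is the index computation only, not the authors' LR derivation), Theil's first index T and second index L (the mean log deviation) can be computed directly:

```python
import math

def theil_t(incomes):
    """Theil's first index: T = (1/n) * sum((x/mu) * ln(x/mu))."""
    n = len(incomes)
    mu = sum(incomes) / n
    return sum((x / mu) * math.log(x / mu) for x in incomes) / n

def theil_l(incomes):
    """Theil's second index (mean log deviation): L = (1/n) * sum(ln(mu/x))."""
    n = len(incomes)
    mu = sum(incomes) / n
    return sum(math.log(mu / x) for x in incomes) / n

equal = [10.0, 10.0, 10.0, 10.0]     # hypothetical income vectors
unequal = [1.0, 5.0, 14.0, 20.0]
print(theil_t(equal))    # 0.0 under perfect equality
print(theil_t(unequal))  # strictly positive under inequality
print(theil_l(unequal))  # strictly positive under inequality
```

Both indices vanish exactly when every income equals the mean and are strictly positive otherwise, which is what makes them usable as deviations from a fairness null.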

Author(s):  
Kristof Bosmans ◽  
Z. Emel Öztürk

Abstract We develop a normative approach to the measurement of inequality of opportunity. That is, we measure inequality of opportunity by the welfare gain obtained in moving from the actual income distribution to the optimal distribution of the total available income. Our study brings together the main approaches in the literature: we axiomatically characterize social welfare functions, we obtain prominent allocation rules as their optima, and we derive familiar classes of inequality-of-opportunity measures. Moreover, our analysis captures the key philosophical distinctions in the literature: ex post versus ex ante compensation, and liberal versus utilitarian reward.
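As a toy illustration of the welfare-gain idea (a special case for intuition, not the paper's axiomatic framework): for a symmetric, strictly concave social welfare function such as the sum of log incomes, the welfare-optimal division of the total income is the equal split, so the measure reduces to W(equal split) minus W(actual):

```python
import math

def welfare(incomes):
    # Illustrative utilitarian social welfare function: sum of log incomes.
    return sum(math.log(x) for x in incomes)

def welfare_gain(incomes):
    """Inequality measured as the welfare gain from moving the actual
    distribution to the welfare-optimal split of the same total income.
    For a symmetric, strictly concave SWF the optimum is the equal split."""
    n = len(incomes)
    mu = sum(incomes) / n
    return welfare([mu] * n) - welfare(incomes)

print(welfare_gain([10.0, 10.0]))  # 0.0: the actual distribution is already optimal
print(welfare_gain([2.0, 18.0]))   # positive: welfare would rise under the equal split
```

The measure is zero exactly when the actual distribution already coincides with the optimum, and positive otherwise; the paper's general framework replaces this simple SWF with axiomatically characterized ones whose optima need not be equal splits.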


Author(s):  
Markus Ekvall ◽  
Michael Höhle ◽  
Lukas Käll

Abstract Motivation Permutation tests offer a straightforward framework for assessing the significance of differences in sample statistics. A significant advantage of permutation tests is that they require relatively few assumptions about the distribution of the test statistic, relying only on the exchangeability of the group labels. They have great value, as they allow a sensitivity analysis to determine the extent to which an assumed broad sampling distribution of the test statistic applies. However, permutation tests are rarely applied in this situation because the running time of naïve implementations is too slow, growing exponentially with the sample size. Nevertheless, developments in the 1980s introduced dynamic programming algorithms that compute exact permutation tests in polynomial time. Despite this significant reduction in running time, the exact test has not yet become one of the predominant statistical tests for medium sample sizes. Here, we propose a computational parallelization of one such dynamic programming-based permutation test, the Green algorithm, which makes the permutation test more attractive. Results Parallelization of the Green algorithm proved possible through a non-trivial rearrangement of the structure of the algorithm. A speed-up by orders of magnitude is achievable by executing the parallelized algorithm on a GPU. We demonstrate that the execution time essentially becomes a non-issue, even for sample sizes as high as hundreds of samples. This improvement makes our method an attractive alternative to, e.g., the widely used asymptotic Mann-Whitney U-test. Availability and implementation Python 3 code is available from the GitHub repository https://github.com/statisticalbiotechnology/parallelPermutationTest under an Apache 2.0 license. Supplementary information Supplementary data are available at Bioinformatics online.
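As a serial illustration of the dynamic-programming idea behind such exact tests (this is not the authors' parallelized Green-algorithm GPU implementation; it applies the same subset-counting recurrence to the rank-sum statistic and assumes untied observations):

```python
from math import comb

def exact_rank_sum_pvalue(x, y):
    """Exact one-sided p-value for the rank-sum statistic of sample x,
    computed by dynamic programming over subset rank sums in polynomial
    time, rather than enumerating all permutations. Assumes no ties."""
    n, m = len(x) + len(y), len(x)
    pooled = sorted(x + y)
    # Observed rank sum of x in the pooled ordering (1-based ranks).
    w_obs = sum(pooled.index(v) + 1 for v in x)
    max_sum = sum(range(n - m + 1, n + 1))
    # count[k][s]: number of k-subsets of ranks {1..n} with rank sum s.
    count = [[0] * (max_sum + 1) for _ in range(m + 1)]
    count[0][0] = 1
    for r in range(1, n + 1):              # add rank r to the pool
        for k in range(min(r, m), 0, -1):  # descend so each rank is used once
            for s in range(max_sum, r - 1, -1):
                count[k][s] += count[k - 1][s - r]
    # P(W >= w_obs) under uniform assignment of ranks to the x-group.
    tail = sum(count[m][s] for s in range(w_obs, max_sum + 1))
    return tail / comb(n, m)

# x occupies the two highest ranks of four: only 1 of C(4,2)=6 labelings
# has a rank sum this extreme, so p = 1/6.
print(exact_rank_sum_pvalue([5.0, 7.0], [1.0, 2.0]))
```

The table `count` has O(m * n^2) entries, so the null distribution is obtained in polynomial time; the exponential cost of enumerating all C(n, m) label permutations never appears.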


2018 ◽  
Vol 1 (2) ◽  
pp. 281-295 ◽  
Author(s):  
Alexander Etz ◽  
Julia M. Haaf ◽  
Jeffrey N. Rouder ◽  
Joachim Vandekerckhove

Hypothesis testing is a special form of model selection. Once a pair of competing models is fully defined, their definitions immediately lead to a measure of how strongly the data support each model. The ratio of these measures of support is often called the likelihood ratio or the Bayes factor. Critical in the model-selection endeavor is the specification of the models. In the case of hypothesis testing, it is of the greatest importance that the researcher specify exactly what is meant by a "null" hypothesis as well as the alternative to which it is contrasted, and that these are suitable instantiations of theoretical positions. Here, we provide an overview of different instantiations of null and alternative hypotheses that can be useful in practice; in all cases, the inferential procedure is based on the same underlying method of likelihood comparison. An associated app can be found at https://osf.io/mvp53/. This article is the work of the authors and is reformatted from the original, which was published under a CC-BY Attribution 4.0 International license and is available at https://psyarxiv.com/wmf3r/.
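For the simplest case of two fully specified (point) hypotheses, the Bayes factor coincides with the plain likelihood ratio. A minimal sketch with binomial data, using hypothetical success rates 0.5 for the null and 0.75 for the alternative (the paper's app covers richer, composite alternatives):

```python
from math import comb

def likelihood(theta, k, n):
    """Binomial likelihood of k successes in n trials at rate theta."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def bayes_factor_10(k, n, theta0=0.5, theta1=0.75):
    """For two point hypotheses the Bayes factor is just the likelihood
    ratio: how much better H1 (theta1) predicts the data than H0 (theta0)."""
    return likelihood(theta1, k, n) / likelihood(theta0, k, n)

print(bayes_factor_10(8, 10))  # > 1: 8/10 successes favor theta = 0.75
print(bayes_factor_10(5, 10))  # < 1: 5/10 successes favor theta = 0.5
```

For composite hypotheses the point likelihoods are replaced by marginal likelihoods, averaging over a prior on theta, but the inferential logic remains this same ratio of support.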

