statistical significance test
Recently Published Documents


TOTAL DOCUMENTS: 20 (five years: 0)
H-INDEX: 7 (five years: 0)

2018 · Vol 23 (2) · pp. 385-395
Author(s): Jan Dul, Erwin van der Laan, Roelof Kuik

In this article, we present a statistical significance test for necessary conditions. This is an elaboration of necessary condition analysis (NCA), a data analysis approach that estimates the necessity effect size of a condition X for an outcome Y. NCA puts a ceiling on the data, representing the level of X that is necessary (but not sufficient) for a given level of Y. The empty space above the ceiling, relative to the total empirical space, characterizes the necessity effect size. We propose a statistical significance test that evaluates the evidence against the null hypothesis that an effect is due to chance. Such a randomness test helps protect researchers from making Type I errors and drawing false-positive conclusions. The test is an "approximate permutation test" and is available in the NCA software for R. We provide suggestions for further statistical development of NCA.
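The ceiling-and-empty-space idea and the approximate permutation test described above can be sketched as follows. This is a minimal illustration, not the authors' NCA implementation: the step ceiling, the grid resolution, and the shuffle-based null are simplifying assumptions.

```python
import numpy as np

def nca_effect_size(x, y, grid=200):
    # Step ceiling: for each x0, the highest Y observed at X <= x0.
    xs = np.linspace(x.min(), x.max(), grid)
    ceiling = np.array([y[x <= x0].max() for x0 in xs])
    # Effect size: empty space above the ceiling as a share of the
    # empirical scope (average empty height divided by the Y range).
    return np.mean(y.max() - ceiling) / (y.max() - y.min())

def nca_permutation_test(x, y, n_perm=2000, seed=0):
    # Null hypothesis: X and Y are unrelated, so any empty space is
    # due to chance.  Shuffling Y breaks the X-Y link while keeping
    # both marginal distributions.
    rng = np.random.default_rng(seed)
    observed = nca_effect_size(x, y)
    perm = np.array([nca_effect_size(x, rng.permutation(y))
                     for _ in range(n_perm)])
    # One-sided p-value with the usual +1 correction.
    p = (1 + np.sum(perm >= observed)) / (1 + n_perm)
    return observed, p
```

With data in which high Y values only occur at high X (an empty upper-left corner), the observed effect size exceeds almost all permuted ones and the p-value is small; with independent data it is not.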


2017 · Vol 146 (1) · pp. 63-75
Author(s): Gregor Skok, Veronika Hladnik

Abstract A novel wind verification methodology is presented and analyzed for six surface wind cases in the greater Alpine region as well as an idealized setup. The methodology is based on the idea of the fractions skill score, a neighborhood-based spatial verification metric frequently used for verifying precipitation. The new score avoids the problems of traditional nonspatial verification metrics (the “double penalty” problem and the failure to distinguish between a “near miss” and much poorer forecasts) and can distinguish forecasts even when the spatial displacement of wind patterns is large. Moreover, the time-averaged score value in combination with a statistical significance test enables different wind forecasts to be ranked by their performance.
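The fractions-skill-score idea the methodology builds on can be sketched for a generic 2D field. This is a minimal sketch of the standard neighborhood-based FSS, not the authors' wind adaptation; the `threshold` and `window` parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    # Binarize: 1 where the field meets or exceeds the threshold.
    bf = (forecast >= threshold).astype(float)
    bo = (observed >= threshold).astype(float)
    # Neighborhood fractions: share of exceedance points inside a
    # window x window box around each grid point.
    pf = uniform_filter(bf, size=window, mode="constant")
    po = uniform_filter(bo, size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    # FSS = 1 for a perfect forecast, ~0 for no overlapping skill.
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

Because the comparison happens over neighborhoods rather than point by point, a spatially displaced feature scores better as the window grows, which is how the score avoids the double-penalty problem and rewards near misses.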


2017 · Vol 2 (1)
Author(s): Ayodele Oloyede, Temitayo Matthew Fagbola, Stephen Olabiyisi, Elijah Omidiora, John Oladosu

Large variation in the facial appearance of the same individual causes most baseline Aging-Invariant Face Recognition Systems (AI-FRS) to misclassify faces at a high rate. Emerging Aging-Invariant Feature Extraction Techniques (AI-FET) for AI-FRS achieve good recognition results, compared to some baseline transformations, under large variations in facial texture and shape. However, the performance results of these AI-FET need to be investigated statistically to avoid misleading conclusions. A statistical significance test can logically justify such performance claims: it serves as a decision rule that bounds the probability of wrongly accepting a performance claim that is in fact faulty. In this paper, the mean quantitative results of the emerging AI-FET (Histogram of Gradient (HoG), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA) and Local Binary Pattern-Gabor Wavelet Transform (LBP-GWT)) and of the baseline aging-invariant techniques (Local Binary Pattern (LBP) and Gabor Wavelet Transform (GWT)) were computed and compared using one-way Analysis of Variance (ANOVA) to determine whether the means are statistically significantly different. The ANOVA results at the 0.05 significance level indicate that the results of the emerging AI-FET are not statistically significantly different from those of the baseline techniques, because the F-critical value was greater than the calculated F-statistic in all evaluations conducted.
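The F-statistic-versus-F-critical decision rule described above can be sketched with one-way ANOVA. The accuracy samples below are invented purely for illustration; the paper's actual measurements are not reproduced.

```python
from scipy import stats

# Hypothetical recognition-accuracy samples (percent), for illustration only.
hog     = [92.1, 91.4, 93.0, 92.5]
pca_lda = [91.8, 92.2, 92.7, 91.9]
lbp     = [91.5, 92.0, 92.4, 91.7]

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(hog, pca_lda, lbp)

# Critical F at alpha = 0.05 with (k-1, N-k) degrees of freedom,
# where k is the number of groups and N the total sample size.
k, n = 3, 12
f_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=n - k)

# Decision rule used in the paper: fail to reject H0 (no difference
# between the group means) when the F-statistic is below F-critical.
no_difference = f_stat < f_crit
```

For these invented samples the group means are close relative to the within-group spread, so the F-statistic falls below the critical value and the means are not significantly different, mirroring the paper's conclusion.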


2012 · Vol 11 (1) · pp. 64
Author(s): Sri Rezeki, Subanar Subanar, Suryo Guritno

Model selection in neural networks can be guided by statistical procedures, such as hypothesis tests, information criteria and cross validation. Taking a statistical perspective is especially important for nonparametric models like neural networks, because the reason for applying them is the lack of knowledge about an adequate functional form. Many researchers have developed model selection strategies for neural networks based on statistical concepts. In this paper, we focus on model evaluation by implementing a statistical significance test. We use the Wald test to evaluate the relevance of parameters in the network for a classification problem. Parameters with no significant influence on any of the network outputs have to be removed. In general, the results show that the Wald test works properly to determine the significance of each weight of the selected model. An empirical study using the Iris data shows that all parameters in the network are significant, except the bias at the first output neuron.
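The Wald test for a single network weight can be sketched as follows. This is a minimal sketch assuming an estimate of the weight's variance is available (e.g. from the inverse Hessian of the training loss); the numeric values at the bottom are hypothetical.

```python
from scipy import stats

def wald_test(theta_hat, var_theta, alpha=0.05):
    """Wald test of H0: theta = 0 for one network weight.

    theta_hat: estimated weight value.
    var_theta: estimated variance of the weight (assumed available,
               e.g. from the inverse Hessian of the loss).
    Returns (W, p, keep): keep is False when the weight is not
    significant and is therefore a candidate for removal.
    """
    w = theta_hat ** 2 / var_theta      # Wald statistic, ~ chi2(1) under H0
    p = stats.chi2.sf(w, df=1)          # right-tail p-value
    return w, p, p < alpha

# Hypothetical weight/variance pairs for illustration:
large_weight = wald_test(0.9, 0.04)    # clearly nonzero -> significant
small_weight = wald_test(0.05, 0.04)   # near zero -> prunable
```

A pruning pass would apply this test to every weight and drop those whose p-value exceeds the chosen significance level, as with the non-significant output bias reported in the abstract.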

