distributional assumption
Recently Published Documents


TOTAL DOCUMENTS

32
(FIVE YEARS 2)

H-INDEX

7
(FIVE YEARS 0)

2021 ◽  
Author(s):  
Kiyofumi Miyoshi ◽  
Yosuke Sakamoto ◽  
Shin'ya Nishida

Theory of visual confidence has largely been grounded in the Gaussian signal detection framework. This framework is so dominant that researchers may be unaware of the idiosyncratic consequences of its distributional assumption. By contrasting Gaussian and logistic signal detection models, this paper systematically evaluates the consequences of auxiliary distributional assumptions for the measurement of metacognitive accuracy and its theoretical implications. We found that these models can lead to opposing conclusions about the efficiency of confidence ratings relative to objective decisions (whether meta-d’ is larger or smaller than d’) as well as about metacognitive efficiency along the internal evidence continuum (whether meta-d’ is larger or smaller for higher levels of confidence). These demonstrations may call for reconsideration of established theories of metacognition that depend critically on auxiliary modeling assumptions. There is no immediate solution to this problem, as our quantitative model comparisons on a large dataset did not identify a clear winner between the Gaussian and logistic metacognitive models. Nevertheless, awareness of hidden modeling assumptions and their systematic consequences should facilitate the cumulative development of the science of metacognition.
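As an illustrative sketch of the contrast this abstract describes (not the authors' meta-d’ fitting procedure), the same hit and false-alarm rates map to different sensitivity values depending on whether the model assumes a Gaussian (probit) or a logistic (logit) evidence distribution:

```python
# Sensitivity under Gaussian vs. logistic signal detection assumptions.
# A minimal sketch: two criterion placements consistent with the SAME
# Gaussian d' = 2 yield different logistic sensitivities.
import math
from statistics import NormalDist

def gaussian_dprime(hit_rate, fa_rate):
    """d' under the equal-variance Gaussian SDT model (probit link)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def logistic_dprime(hit_rate, fa_rate):
    """Sensitivity under a logistic SDT model (logit link)."""
    logit = lambda p: math.log(p / (1.0 - p))
    return logit(hit_rate) - logit(fa_rate)

phi = NormalDist().cdf
pairs = [(phi(1.0), phi(-1.0)),   # criterion at 0, Gaussian d' = 2
         (phi(0.2), phi(-1.8))]   # criterion at 0.8, Gaussian d' = 2

for hr, far in pairs:
    print(f"HR={hr:.3f} FAR={far:.3f}  "
          f"Gaussian d'={gaussian_dprime(hr, far):.3f}  "
          f"logistic d'={logistic_dprime(hr, far):.3f}")
```

Both rows recover the Gaussian d' of exactly 2, while the logistic sensitivity differs across the two criteria, a small concrete instance of how conclusions about efficiency can hinge on the assumed distribution.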


2021 ◽  
Vol 18 ◽  
pp. 1380-1388
Author(s):  
Tirngo Dinku ◽  
Worku Gardachw ◽  
Ngozi Adeleye

This study models the volatility of returns for selected agricultural commodity prices in Ethiopia using the generalized autoregressive conditional heteroskedasticity (GARCH) approach. GARCH-family models, specifically threshold GARCH (TGARCH) and exponential GARCH (EGARCH), were employed to analyze the time-varying volatility of selected agricultural commodity prices from 2010 to 2021. The results revealed that, among the GARCH specifications, the EGARCH model with normally distributed residuals provided the best fit for the price volatility of teff and red pepper, whose return series reacted differently to “good” and “bad” news. The asymmetric term was statistically significant, indicating a leverage effect: “bad” news has a larger effect on volatility than “good” news of the same magnitude.
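A minimal sketch of the EGARCH(1,1) recursion behind the leverage effect the abstract reports (parameter values here are made up, not the fitted Ethiopian commodity model): with a negative asymmetry coefficient, a negative shock raises next-period variance more than a positive shock of the same size.

```python
# One-step EGARCH(1,1) log-variance update; illustrative parameters only.
import math

def egarch_next_logvar(logvar, z, omega=-0.1, alpha=0.2, gamma=-0.15, beta=0.95):
    """ln(sigma_t^2) given last ln(sigma^2) and standardized shock z.
    E|z| = sqrt(2/pi) for a standard normal z; gamma < 0 encodes leverage."""
    e_abs_z = math.sqrt(2.0 / math.pi)
    return omega + beta * logvar + alpha * (abs(z) - e_abs_z) + gamma * z

lv = 0.0  # start from unit variance
bad  = egarch_next_logvar(lv, -1.0)  # "bad" news
good = egarch_next_logvar(lv, +1.0)  # "good" news of the same magnitude
print(f"next variance after bad news : {math.exp(bad):.4f}")
print(f"next variance after good news: {math.exp(good):.4f}")
```

The gap between the two log-variances equals -2*gamma, so the asymmetry is controlled entirely by the sign and size of gamma.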


2020 ◽  
pp. 107699862095724
Author(s):  
Renske E. Kuijpers ◽  
Ingmar Visser ◽  
Dylan Molenaar

Mixture models have been developed to enable detection of within-subject differences in responses and response times to psychometric test items. To enable mixture modeling of both responses and response times, a distributional assumption is needed for the within-state response time distribution. Since violations of the assumed response time distribution may bias the modeling results, choosing an appropriate within-state distribution is important. However, testing this distributional assumption is challenging, as the latent within-state response time distribution is by definition different from the observed distribution, so existing tests on the observed distribution cannot be used. In this article, we propose statistical tests on the within-state response time distribution in a mixture modeling framework for responses and response times. We investigate the viability of the newly proposed tests in a simulation study, and we apply the tests to a real data set.
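A sketch of why tests on the observed distribution fall short (hypothetical parameters, not the article's proposed tests): each latent state below generates exactly lognormal response times, yet the pooled log-RTs are clearly non-normal, so a test applied to the observed data would wrongly flag the within-state assumption.

```python
# Two within-state lognormal RT distributions (fast vs. slow state);
# the pooled/observed distribution looks non-normal even though the
# within-state assumption holds exactly.
import math
import random

random.seed(42)

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)

fast = [random.lognormvariate(-1.0, 0.3) for _ in range(14000)]
slow = [random.lognormvariate(0.5, 0.3) for _ in range(6000)]

within_skew = skewness([math.log(rt) for rt in fast])          # ~0: exactly normal
pooled_skew = skewness([math.log(rt) for rt in fast + slow])   # far from 0

print(f"log-RT skewness within fast state: {within_skew:+.3f}")
print(f"log-RT skewness of pooled data   : {pooled_skew:+.3f}")
```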


2020 ◽  
Vol 40 (1) ◽  
pp. 7-26 ◽  
Author(s):  
Pankaj C. Patel ◽  
Cong Feng

A lesbian, gay, bisexual, and transgender workplace equality policy (LGBT-WEP) helps signal and reinforce the organizational commitment to workplace equality and diversity. Prior evidence suggests that LGBT-WEP is viewed favorably by stakeholders (customers, employees, and channel partners) and influences firm performance. Drawing on stakeholder theory and the resource-based view of the firm, the authors examine whether LGBT-WEP influences customer satisfaction through marketing capability and whether demand instability dampens these associations. To alleviate endogeneity concerns of LGBT-WEP, they exploit the plausibly exogenous state-to-state variations in workplace equality policies determined by statewide laws on nondiscrimination based on sexual orientation. Empirical results indicate that LGBT-WEP positively influences customer satisfaction both directly and through enhanced marketing capability. Demand instability, however, dampens these associations. Additional analyses with alternate measures of key variables, alternate distributional assumptions, and alternate model specifications yield consistent results.


2020 ◽  
Vol 43 (2) ◽  
pp. 315-344
Author(s):  
Luis Carlos Bravo Melo ◽  
Jennyfer Portilla Yela ◽  
José Rafael Tovar Cuevas

When performing validation studies of diagnostic classification procedures, one or more biomarkers are typically measured in each individual. Some of these biomarkers may be more informative than others; moreover, more than one biomarker may be significant, and the biomarkers may be mutually dependent. This study proposes a method to estimate the area under the receiver operating characteristic curve (AUC) for classifying individuals in a screening study. We model the dependence between the test results using copulas (the FGM and Gumbel-Barnett copula functions) and study the resulting AUC under this type of dependence. Three dependence-level values were evaluated for each copula function considered. Most of the reviewed literature assumes a normal model for the performance of the biomarkers used in clinical diagnosis, but there are situations in which normality cannot be assumed because that model is unsuitable for one or both biomarkers. The proposed statistical model does not depend on any distributional assumption for the biomarkers used in the diagnostic procedure and, additionally, does not require a strong or moderate linear dependence between them.
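The distribution-free flavor of the AUC can be sketched with its Mann-Whitney form, the probability that a randomly chosen diseased subject scores higher than a healthy one (a generic sketch on made-up biomarker values; the paper's copula-based model is not reproduced here):

```python
# Nonparametric (distribution-free) AUC estimate over all case-control pairs.
def empirical_auc(healthy, diseased):
    """AUC = P(D > H) + 0.5 * P(D == H), estimated over all pairs."""
    wins = sum(1.0 if d > h else 0.5 if d == h else 0.0
               for d in diseased for h in healthy)
    return wins / (len(diseased) * len(healthy))

healthy  = [0.8, 1.1, 1.3, 1.6, 2.0]   # hypothetical biomarker values
diseased = [1.4, 1.9, 2.2, 2.6, 3.1]
print(f"empirical AUC = {empirical_auc(healthy, diseased):.3f}")  # 0.880
```

No normality (or any other parametric form) is assumed: the estimate depends only on the ranks of the observations.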


Mathematics ◽  
2020 ◽  
Vol 8 (5) ◽  
pp. 719
Author(s):  
Audrius Kabašinskas ◽  
Kristina Šutienė ◽  
Miloš Kopa ◽  
Kęstutis Lukšys ◽  
Kazimieras Bagdonas

The pension landscape is changing due to market conditions, and technological change has enabled financial innovations. Pension savers usually seek financial advice to make a personalised decision when selecting the right pension fund. Decision rules based on the assumed risk profile of the decision maker can be generated using stochastic dominance (SD). In this paper, the second-pillar pension funds operating in Lithuania and Slovakia are analysed according to SD rules. The importance of the distributional assumption is explored by comparing the results obtained under empirical, Student-t, hyperbolic, and normal inverse Gaussian distributions to generate SD-based rules that could be integrated into an advisory solution. Moreover, because SD results differ across distributional assumptions, a new SD ratio is proposed that condenses the dominance relations over all considered dominance orders and probability distributions. The empirical results indicate that this new SD ratio efficiently characterises the preference not only for each fund individually but also for groups of funds with the same attributes, thus enabling multi-risk and multi-country comparisons.
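A sketch of the simplest such rule, first-order stochastic dominance (FSD), checked directly on empirical return samples (toy numbers, not the pension-fund data or the paper's SD ratio): fund A dominates fund B if A's empirical CDF lies at or below B's everywhere.

```python
# First-order stochastic dominance on empirical samples.
def ecdf(sample, x):
    return sum(1 for s in sample if s <= x) / len(sample)

def fsd_dominates(a, b):
    """True if A first-order stochastically dominates B."""
    grid = sorted(set(a) | set(b))
    return all(ecdf(a, x) <= ecdf(b, x) for x in grid)

fund_b = [-0.02, 0.00, 0.01, 0.03, 0.05]   # hypothetical monthly returns
fund_a = [r + 0.01 for r in fund_b]        # uniformly 1% better
print(fsd_dominates(fund_a, fund_b))       # True: shifted-up fund dominates
print(fsd_dominates(fund_b, fund_a))       # False
```

Higher-order dominance (second order and beyond) replaces the pointwise CDF comparison with comparisons of integrated CDFs, which is where the choice of distribution fitted to the returns starts to matter.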


2020 ◽  
Author(s):  
Robert Malte Polzin ◽  
Annette Müller ◽  
Peter Nevir ◽  
Henning Rust ◽  
Peter Koltai

This work investigates the stochastic aggregation of convective structures at different scales in the atmosphere. A computational framework is applied that provides highly scalable identification of reduced Bayesian models. The deterministic large-scale flow variables are reduced into latent states, to which the stochastic small-scale convective structures are associated. Analysing the latent states, in both their number and their reduction via maximisation, improves the understanding of the large-scale forcing of convective processes. The convective structures are identified by vertical velocities. Several variables of the large-scale flow are investigated, such as the convective available potential energy, available moisture, vertical wind shear, and the Dynamic State Index (DSI), a diabaticity indicator. Our approach does not require a distributional assumption but works instead with a discretised and categorised state vector.
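The kind of discretisation the abstract mentions can be sketched as binning continuous large-scale variables into categories that together form a state vector (the variable names and bin edges below are illustrative, not the study's actual binning):

```python
# Turning continuous large-scale flow variables into a discretised,
# categorised state vector; thresholds are hypothetical.
def categorise(value, edges):
    """Return the index of the bin that `value` falls into."""
    return sum(1 for e in edges if value >= e)

def state_vector(cape, moisture, shear):
    # Illustrative bin edges for CAPE (J/kg), column moisture (mm),
    # and bulk wind shear (m/s).
    return (categorise(cape,     [500.0, 1500.0]),
            categorise(moisture, [20.0, 40.0]),
            categorise(shear,    [5.0, 15.0]))

print(state_vector(cape=800.0, moisture=45.0, shear=3.0))  # (1, 2, 0)
```

Working with such categorical states sidesteps any parametric assumption about how the underlying continuous variables are distributed.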


2020 ◽  
Vol 45 (4) ◽  
pp. 475-506 ◽  
Author(s):  
Soojin Park ◽  
Gregory J. Palardy

Estimating the effects of randomized experiments and, by extension, their mediating mechanisms is often complicated by treatment noncompliance. Two estimation methods for causal mediation in the presence of noncompliance have recently been proposed: the instrumental variable method (IV-mediate) and the maximum likelihood method (ML-mediate). However, little research has examined their performance when certain assumptions are violated and under varying data conditions. This article addresses that gap in the research and compares the performance of the two methods. The results show that the distributional assumption about compliance behavior plays an important role in estimation: regardless of the estimation method or whether the other assumptions hold, results are biased if the distributional assumption is not met. We also found that the IV-mediate method is more sensitive to exclusion restriction violations, while the ML-mediate method is more sensitive to monotonicity violations. Moreover, estimates depend in part on the compliance rate, the sample size, and the availability and impact of control covariates. These findings are used to provide guidance on estimator selection.


2019 ◽  
Vol 86 (12) ◽  
pp. 773-783 ◽  
Author(s):  
Katy Klauenberg ◽  
Clemens Elster

In metrology, the normal distribution is often taken for granted, e.g. when evaluating the result of a measurement and its uncertainty, or when establishing the equivalence of measurements in key or supplementary comparisons. The correctness of this inference and of subsequent conclusions depends on the normality assumption, so validating this assumption is essential. Hypothesis testing is the formal statistical framework for doing so, and this introduction describes how statistical tests detect violations of a distributional assumption. In the metrological context, we advise on how to select such a hypothesis test, how to set it up, how to perform it, and which conclusions can be drawn. In addition, we calculate the number of measurements needed to decide whether a process departs from a normal distribution and quantify the confidence in this decision. These aspects are illustrated with the powerful Shapiro-Wilk test and an example from legal metrology, for which we recommend performing 330 measurements. We also briefly touch upon the issues of multiple testing and rounded measurements.
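The logic of such a normality test can be sketched in a few lines. The statistic below is Jarque-Bera (chosen here because it is easy to write out), not the Shapiro-Wilk test the paper recommends; it flags departures from normality through sample skewness and excess kurtosis.

```python
# Jarque-Bera-style normality check on 330 measurements (the sample size
# the abstract recommends); illustrative, not the Shapiro-Wilk procedure.
import math
import random

def jarque_bera(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    skew = sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)
    kurt = sum((x - m) ** 4 for x in xs) / (n * s2 ** 2)
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

CHI2_CRIT = 5.99  # chi-squared(2) critical value at the 5% level

random.seed(1)
normal_data = [random.gauss(0.0, 1.0) for _ in range(330)]
skewed_data = [random.expovariate(1.0) for _ in range(330)]

print(f"JB(normal data) = {jarque_bera(normal_data):7.2f}")
print(f"JB(skewed data) = {jarque_bera(skewed_data):7.2f}  "
      f"(> {CHI2_CRIT}: reject normality)")
```

For this seed the exponential (skewed) sample far exceeds the critical value and normality is rejected, while the statistic for the genuinely normal sample is orders of magnitude smaller.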

