Comparison of liquidity-based and financial performance-based indicators in financial analysis

2017 ◽  
Vol 8 (1) ◽  
pp. 83 ◽  
Author(s):  
Igor Pustylnick

Research background: Since the turn of the 21st century, financial statement manipulations have been at the center of attention for accountants, auditors and financial analysts. Having been classified by regulators as fraudulent, earnings management has required a separate detection methodology. Most detection research is performed by comparing a large number of statements of the same company in order to find irregularities in earnings behavior, so shortening the detection time and reducing the amount of data required become important. Purpose of the article: The goal was to compare the characteristics of M-Score and ∆P-∆R and to identify their advantages and limitations. By applying both indicators to different samples, the research attempted to determine the statistical connection between them and to establish the limits of their applicability. Since the M-Score indicator is liquidity-based, the research also attempted to determine the extent to which M-Score and Z-Score are statistically related. Methods: The paper compares the behavior of both indicators using several samples of financial data: a sample of companies charged with fraud, a sample with exceptional liquidity, a large random sample, and a sample from an emerging-market economy. From the main sample, two further subsamples (one based on poor Z-Score and one based on exceptional Z-Score) were extracted. For all samples, ∆P-∆R, M-Score and Z-Score were compared statistically with one another. Findings and value added: The research identified the limitations of ∆P-∆R and M-Score in stable markets and was able to connect them in the emerging market using a linear regression model that also included Z-Score. The research confirmed that M-Score can mistake exceptional performance for manipulation, resulting in Type I errors, while ∆P-∆R appeared somewhat coarse and prone to Type II errors. The combined use of both indicators in emerging markets provides the best approach.
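As a rough, hedged illustration of the indicators involved (not the article's own computations), the sketch below implements the textbook Altman Z-Score and Beneish M-Score formulas; the ∆P-∆R indicator and the regression actually estimated in the paper are not reproduced, and the cutoffs mentioned in the comments are general conventions rather than findings of this study.

```python
# Sketch: textbook Altman Z-Score (1968) and Beneish M-Score formulas.
# The delta-P/delta-R indicator and the paper's regression are NOT reproduced.
import numpy as np

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    """Original Altman Z-Score for public manufacturing firms."""
    ta = total_assets
    return (1.2 * working_capital / ta
            + 1.4 * retained_earnings / ta
            + 3.3 * ebit / ta
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / ta)

def beneish_m(dsri, gmi, aqi, sgi, depi, sgai, tata, lvgi):
    """Beneish M-Score from its eight published ratio indices."""
    return (-4.84 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# Common rule-of-thumb cutoffs: Z < 1.81 suggests financial distress and
# M > -2.22 suggests possible manipulation (conventions, not results of the
# paper). The paper links such indicators via a linear regression, which with
# sample data could be estimated with np.linalg.lstsq on the computed scores.
```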

2018 ◽  
Vol 7 (10) ◽  
pp. 409 ◽  
Author(s):  
Youqiang Dong ◽  
Ximin Cui ◽  
Li Zhang ◽  
Haibin Ai

The progressive TIN (triangular irregular network) densification (PTD) filter algorithm is widely used for filtering point clouds. In the PTD algorithm, the iterative densification parameters become smaller over the course of filtering, which leads to poor performance—especially high type I errors—for point clouds with high density and high variance. Hence, an improved PTD filtering algorithm for such point clouds is proposed in this paper. The improved method divides the iterative densification process into two stages. In the first stage, the iterative densification process of the classical PTD algorithm is used, and the two densification parameters become smaller. When the density of points belonging to the TIN exceeds a certain value (defined in this paper as the standard variance intervention density), the process moves into the second stage, in which a new iterative densification strategy based on multiple scales is applied and the angle threshold becomes larger. The experimental results show that the improved algorithm reduces the type I errors and total errors of the DIM point clouds by 7.53% and 4.09%, respectively, compared with the classical PTD algorithm. Although the type II errors increase slightly with our improved method, the wrongly added object points have little effect on the accuracy of the generated DSM. In short, our improved method refines the classical PTD method and offers a better solution for filtering point clouds with high density and variance.
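The sketch below is a heavily simplified, assumed outline of a PTD-style filtering loop with a two-stage parameter switch; the seed selection, distance and angle tests, the intervention-density criterion, and the stand-in for the multi-scale second stage are illustrative approximations rather than the authors' implementation, and all parameter values are assumptions.

```python
# Schematic PTD-style ground filter with a two-stage densification switch.
# Geometry is simplified; thresholds and the intervention density are made up.
import numpy as np
from scipy.spatial import Delaunay

def ptd_filter_sketch(points, cell_size=20.0, d_max=1.5, a_max_deg=15.0,
                      intervention_density=0.5, a_max_stage2_deg=25.0, n_iter=8):
    """points: (N, 3) array of x, y, z; returns a boolean ground mask."""
    xy, z = points[:, :2], points[:, 2]
    ground = np.zeros(len(points), dtype=bool)

    # Seed surface: lowest point per grid cell, as in the classical PTD filter.
    cells = np.floor(xy / cell_size).astype(int)
    for c in np.unique(cells, axis=0):
        idx = np.where((cells == c).all(axis=1))[0]
        ground[idx[np.argmin(z[idx])]] = True

    a_max = np.radians(a_max_deg)
    area = np.ptp(xy[:, 0]) * np.ptp(xy[:, 1]) + 1e-9
    for _ in range(n_iter):
        tin = Delaunay(xy[ground])
        g_xy, g_z = xy[ground], z[ground]
        for i in np.where(~ground)[0]:
            s = tin.find_simplex(xy[i])
            if s < 0:
                continue
            tri = tin.simplices[s]
            p0, p1, p2 = np.c_[g_xy[tri], g_z[tri]]      # facet vertices in 3D
            n = np.cross(p1 - p0, p2 - p0)
            if abs(n[2]) < 1e-9:
                continue
            # Vertical distance from the candidate point to the facet plane.
            z_plane = p0[2] - (n[0] * (xy[i, 0] - p0[0]) +
                               n[1] * (xy[i, 1] - p0[1])) / n[2]
            dist = abs(z[i] - z_plane)
            # Crude angle proxy: elevation angle to the nearest facet vertex.
            d_xy = np.linalg.norm(g_xy[tri] - xy[i], axis=1).min()
            angle = np.arctan2(dist, max(d_xy, 1e-9))
            if dist < d_max and angle < a_max:
                ground[i] = True
        # Stage switch: once TIN density passes the (assumed) intervention
        # density, enlarge the angle threshold as a stand-in for the
        # multi-scale second stage of the improved method.
        if ground.sum() / area > intervention_density:
            a_max = np.radians(a_max_stage2_deg)
    return ground
```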


2019 ◽  
Vol 8 (4) ◽  
pp. 1849-1853

Nowadays, many people apply for bank loans to meet their needs, but banks cannot provide loans to every applicant, so they use various measures to identify eligible customers. Sensitivity and specificity are widely used to measure the performance of classifiers for categorical variables in medicine and, to a lesser extent, in econometrics. Even with such measures, granting loans to the wrong customers, who may be unable to repay them, and denying loans to customers who could repay lead to Type I and Type II errors, respectively. To minimize these errors, this study explains, first, how to judge whether sensitivity is large or small and, second, how to establish benchmarks for forecasting with the model through a fuzzy analysis based on fuzzy weights, which is then compared with the sensitivity analysis.
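As a minimal sketch of the quantities involved, the following code computes sensitivity, specificity, and the corresponding Type I and Type II error rates from loan outcomes; the positive-class convention and the example labels are assumptions, and the fuzzy-weighting scheme of the study is not reproduced.

```python
# Minimal sketch: sensitivity/specificity and Type I / Type II error rates for
# a loan-approval classifier. "Positive" means "will repay" (an assumption).
# Following the pairing in the abstract: lending to a defaulter is a Type I
# error, rejecting a good customer is a Type II error.
import numpy as np

def confusion_rates(actual, predicted):
    actual, predicted = np.asarray(actual, bool), np.asarray(predicted, bool)
    tp = np.sum(actual & predicted)       # approved and repaid
    tn = np.sum(~actual & ~predicted)     # rejected and would have defaulted
    fp = np.sum(~actual & predicted)      # approved but defaulted   (Type I)
    fn = np.sum(actual & ~predicted)      # rejected but would repay (Type II)
    return {"sensitivity": tp / (tp + fn),      # true-positive rate
            "specificity": tn / (tn + fp),      # true-negative rate
            "type_I_rate": fp / (fp + tn),      # = 1 - specificity
            "type_II_rate": fn / (fn + tp)}     # = 1 - sensitivity

print(confusion_rates(actual=[1, 1, 0, 0, 1, 0], predicted=[1, 0, 0, 1, 1, 0]))
```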


2019 ◽  
Vol 100 (10) ◽  
pp. 1987-2007 ◽  
Author(s):  
Thomas Knutson ◽  
Suzana J. Camargo ◽  
Johnny C. L. Chan ◽  
Kerry Emanuel ◽  
Chang-Hoi Ho ◽  
...  

An assessment was made of whether detectable changes in tropical cyclone (TC) activity are identifiable in observations and whether any changes can be attributed to anthropogenic climate change. Overall, historical data suggest detectable TC activity changes in some regions associated with TC track changes, while data quality and quantity issues create greater challenges for analyses based on TC intensity and frequency. A number of specific published conclusions (case studies) about possible detectable anthropogenic influence on TCs were assessed using the conventional approach of preferentially avoiding type I errors (i.e., overstating anthropogenic influence or detection). We conclude there is at least low to medium confidence that the observed poleward migration of the latitude of maximum intensity in the western North Pacific is detectable, or highly unusual compared to expected natural variability. Opinion on the author team was divided on whether any observed TC changes demonstrate discernible anthropogenic influence, or whether any other observed changes represent detectable changes. The issue was then reframed by assessing evidence for detectable anthropogenic influence while seeking to reduce the chance of type II errors (i.e., missing or understating anthropogenic influence or detection). For this purpose, we used a much weaker “balance of evidence” criterion for assessment. This leads to a number of more speculative TC detection and/or attribution statements, which we recognize have substantial potential for being false alarms (i.e., overstating anthropogenic influence or detection) but which may be useful for risk assessment. Several examples of these alternative statements, derived using this approach, are presented in the report.


1990 ◽  
Vol 15 (3) ◽  
pp. 237-247 ◽  
Author(s):  
Rand R. Wilcox

Let X and Y be dependent random variables with variances σ²x and σ²y. Recently, McCulloch (1987) suggested a modification of the Morgan-Pitman test of H₀: σ²x = σ²y. But, as this paper describes, there are situations where McCulloch’s procedure is not robust. A subsample approach, similar to the Box-Scheffé test, is also considered and found to give conservative results, in terms of Type I errors, for all situations considered, but it yields relatively low power. New results on the Sandvik-Olsson procedure are also described, but the procedure is found to be nonrobust in situations not previously considered, and its power can be low relative to the two other techniques considered here. A modification of the Morgan-Pitman test based on the modified maximum likelihood estimate of a correlation is also considered. This last procedure appears to be robust in situations where the Sandvik-Olsson (1982) and McCulloch procedures are robust, and it can have more power than the Sandvik-Olsson procedure. But it too gives unsatisfactory results in certain situations. Thus, in terms of power, McCulloch’s procedure is found to be best, with the advantage of being simple to use. But it is concluded that, in terms of controlling both Type I and Type II errors, a satisfactory solution does not yet exist.
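For context, the classical Morgan-Pitman test rests on the identity Cov(X+Y, X-Y) = Var(X) - Var(Y), so equal variances correspond to zero correlation between the sum and the difference of the paired observations. A minimal sketch of that baseline test (not McCulloch's, Wilcox's, or the Sandvik-Olsson modifications) follows; the example data are invented.

```python
# Sketch of the classical Morgan-Pitman test of H0: var(X) = var(Y) for
# dependent (paired) X and Y: test whether corr(X + Y, X - Y) = 0.
import numpy as np
from scipy import stats

def morgan_pitman(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    r, p = stats.pearsonr(x + y, x - y)   # zero correlation <=> equal variances
    return r, p

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(scale=1.5, size=50)   # y has the larger variance
print(morgan_pitman(x, y))
```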


1993 ◽  
Vol 76 (2) ◽  
pp. 407-412 ◽  
Author(s):  
Donald W. Zimmerman

This study investigated violations of random sampling and random assignment in data analyzed by nonparametric significance tests. A computer program induced correlations within groups, as well as between groups, and performed one-sample and two-sample versions of the Mann-Whitney-Wilcoxon test on the resulting scores. Nonindependence of observations within groups spuriously inflated the probability of Type I errors and depressed the probability of Type II errors, and nonindependence between groups had the reverse effect. This outcome, which parallels the influence of nonindependence on parametric tests, can be explained by the equivalence of the Mann-Whitney-Wilcoxon test and the Student t test performed on ranks replacing the initial scores.
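A small simulation in the same spirit (not the authors' program) can reproduce the inflation: draw equicorrelated observations within each group so that the null hypothesis holds, and count how often the Mann-Whitney-Wilcoxon test rejects at α = .05. The correlation strength and sample sizes below are assumptions.

```python
# Sketch: Type I error rate of the Mann-Whitney-Wilcoxon test when
# observations WITHIN each group are positively correlated (null is true).
import numpy as np
from scipy import stats

def rejection_rate(n=15, rho=0.3, alpha=0.05, reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    cov = np.full((n, n), rho) + (1 - rho) * np.eye(n)   # equicorrelated group
    rejections = 0
    for _ in range(reps):
        g1 = rng.multivariate_normal(np.zeros(n), cov)
        g2 = rng.multivariate_normal(np.zeros(n), cov)
        if stats.mannwhitneyu(g1, g2, alternative="two-sided").pvalue < alpha:
            rejections += 1
    return rejections / reps

print(rejection_rate(rho=0.0), rejection_rate(rho=0.3))  # ~0.05 vs. inflated
```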


2021 ◽  
Author(s):  
Antonia Vehlen ◽  
William Standard ◽  
Gregor Domes

Advances in eye tracking technology have enabled the development of interactive experimental setups to study social attention. Since these setups differ substantially from the eye tracker manufacturer’s test conditions, validation is essential with regard to data quality and other factors potentially threatening data validity. In this study, we evaluated the impact of data accuracy and areas of interest (AOIs) size on the classification of simulated gaze data. We defined AOIs of different sizes using the Limited-Radius Voronoi-Tessellation (LRVT) method, and simulated gaze data for facial target points with varying data accuracy. As hypothesized, we found that data accuracy and AOI size had strong effects on gaze classification. In addition, these effects were not independent and differed for falsely classified gaze inside AOIs (Type I errors) and falsely classified gaze outside the predefined AOIs (Type II errors). The results indicate that smaller AOIs generally minimize false classifications as long as data accuracy is good enough. For studies with lower data accuracy, Type II errors can still be compensated to some extent by using larger AOIs, but at the cost of an increased probability of Type I errors. Proper estimation of data accuracy is therefore essential for making informed decisions regarding the size of AOIs.
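A stripped-down sketch of such a simulation is given below: gaze samples are scattered around facial target points with a given accuracy and assigned to the AOI of the nearest target only if they fall within a fixed radius, mimicking a limited-radius Voronoi tessellation. The coordinates, radius, and noise level are assumptions rather than the study's parameters.

```python
# Sketch: classify simulated gaze samples into limited-radius Voronoi AOIs.
# Target coordinates, AOI radius, and accuracy (noise SD) are invented.
import numpy as np

targets = np.array([[0.0, 0.0], [3.0, 0.0], [1.5, -3.0]])  # e.g. eyes, mouth

def classify(gaze, targets, radius):
    """Index of the nearest target if within the AOI radius, else -1 (outside)."""
    d = np.linalg.norm(gaze[:, None, :] - targets[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    nearest[d.min(axis=1) > radius] = -1
    return nearest

rng = np.random.default_rng(0)
true_aoi = rng.integers(0, len(targets), size=1000)
gaze = targets[true_aoi] + rng.normal(scale=0.8, size=(1000, 2))  # accuracy error

labels = classify(gaze, targets, radius=1.5)
type_i = np.mean((labels != -1) & (labels != true_aoi))  # wrongly inside an AOI
type_ii = np.mean(labels == -1)                          # wrongly outside all AOIs
print(type_i, type_ii)
```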


2017 ◽  
Author(s):  
Torrin Liddell ◽  
John K. Kruschke

We surveyed all articles in the Journal of Personality and Social Psychology (JPSP), Psychological Science (PS), and the Journal of Experimental Psychology: General (JEP:G) that mentioned the term "Likert," and found that 100% of the articles that analyzed ordinal data did so using a metric model. We present novel evidence that analyzing ordinal data as if they were metric can systematically lead to errors. We demonstrate false alarms (i.e., detecting an effect where none exists, Type I errors) and failures to detect effects (i.e., loss of power, Type II errors). We demonstrate systematic inversions of effects, for which treating ordinal data as metric indicates the opposite ordering of means than the true ordering of means. We show the same problems (false alarms, misses, and inversions) for interactions in factorial designs and for trend analyses in regression. We demonstrate that averaging across multiple ordinal measurements does not solve or even ameliorate these problems. We provide simple graphical explanations of why these mistakes occur. Moreover, we point out that there is no sure-fire way to detect these problems by treating the ordinal values as metric, and instead we advocate use of ordered-probit models (or similar) because they will better describe the data. Finally, although frequentist approaches to some ordered-probit models are available, we use Bayesian methods because of their flexibility in specifying models and their richness and accuracy in providing parameter estimates.
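The false-alarm mechanism can be illustrated with a small simulation in the spirit of the article (not the authors' code): two groups share the same latent mean but differ in latent spread, responses are discretized into a 5-point scale through fixed thresholds, and a metric t test is applied to the ordinal scores. Thresholds, sample sizes, and standard deviations below are assumptions.

```python
# Sketch: equal latent means, unequal latent SDs, common response thresholds.
# A metric t test on the resulting 1-5 ratings rejects far more often than 5%
# even though the latent means are identical (a false alarm / Type I error).
import numpy as np
from scipy import stats

def false_alarm_rate(n=50, sd1=1.0, sd2=3.0, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    cuts = np.array([-0.5, 0.5, 1.5, 2.5])   # latent thresholds -> 5 categories
    hits = 0
    for _ in range(reps):
        y1 = np.digitize(rng.normal(0, sd1, n), cuts) + 1   # ordinal 1..5
        y2 = np.digitize(rng.normal(0, sd2, n), cuts) + 1
        if stats.ttest_ind(y1, y2).pvalue < 0.05:
            hits += 1
    return hits / reps

print(false_alarm_rate())   # far above the nominal 0.05
```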


1992 ◽  
Vol 75 (3) ◽  
pp. 1011-1020 ◽  
Author(s):  
Donald W. Zimmerman ◽  
Richard H. Williams ◽  
Bruno D. Zumbo

A computer-simulation study examined the one-sample Student t test under violation of the assumption of independent sample observations. The probability of Type I errors increased, and the probability of Type II errors decreased, spuriously elevating the entire power function. The magnitude of the change depended on the correlation between pairs of sample values as well as the number of sample values that were pairwise correlated. A modified t statistic, derived from an unbiased estimate of the population variance that assumed only exchangeable random variables instead of independent, identically distributed random variables, effectively corrected for nonindependence for all degrees of correlation and restored the probability of Type I and Type II errors to their usual values.
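Under the simple assumption of equicorrelated (exchangeable) observations with known correlation ρ, the usual one-sample t statistic can be rescaled by √((1−ρ)/(1+(n−1)ρ)) to restore its nominal distribution; the sketch below demonstrates the inflation and this textbook-style correction, which is not necessarily the exact statistic derived in the article.

```python
# Sketch: Type I error of the one-sample t test with equicorrelated observations
# (true mean 0), and a rescaled t that accounts for the known correlation rho.
import numpy as np
from scipy import stats

def type1_rates(n=20, rho=0.3, reps=4000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.full((n, n), rho) + (1 - rho) * np.eye(n)
    crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    shrink = np.sqrt((1 - rho) / (1 + (n - 1) * rho))   # exchangeable correction
    naive = corrected = 0
    for _ in range(reps):
        x = rng.multivariate_normal(np.zeros(n), cov)
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        naive += abs(t) > crit
        corrected += abs(t * shrink) > crit
    return naive / reps, corrected / reps

print(type1_rates())   # naive rate is inflated; corrected rate is near 0.05
```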


2009 ◽  
Vol 84 (5) ◽  
pp. 1395-1428 ◽  
Author(s):  
Joseph V. Carcello ◽  
Ann Vanstraelen ◽  
Michael Willenborg

ABSTRACT: We study going-concern (GC) reporting in Belgium to examine the effects associated with a shift toward rules-based audit standards. Beginning in 2000, a major revision in Belgian GC audit standards took effect. Among its changes, auditors must ascertain whether their clients are in compliance with two “financial-juridical criteria” for board of directors' GC disclosures. In a study of a sample of private Belgian companies, we report two major findings. First, there is a decrease in auditor Type II errors, particularly by non-Big 6/5 auditors for their clients that fail both criteria. Second, there is an increase in Type I errors, again particularly for companies that fail both criteria. We also conduct an ex post analysis of the decrease in Type II errors and the increase in Type I errors. Our findings suggest the standard engenders both favorable and unfavorable effects, the net of which depends on the priorities assigned to the affected parties (creditors, auditors, companies, and employees).


Author(s):  
D. Brynn Hibbert ◽  
J. Justin Gooding

• To understand the concept of the null hypothesis and the role of Type I and Type II errors.
• To test whether data are normally distributed and whether a datum is an outlier.
• To determine whether there is systematic error in the mean of measurement results.
• To perform tests to compare the means of two sets of data.

One of the uses to which data analysis is put is to answer questions about the data, or about the system that the data describe. In the former category are "is the data normally distributed?" and "are there any outliers in the data?" (see the discussions in chapter 1). Questions about the system might be "is the level of alcohol in the suspect's blood greater than 0.05 g/100 mL?" or "does the new sensor give the same results as the traditional method?" In answering these questions we determine the probability of finding the data given the truth of a stated hypothesis—hence "hypothesis testing." A hypothesis is a statement that might, or might not, be true. Usually the hypothesis is set up in such a way that it is possible to calculate the probability (P) of the data (or the test statistic calculated from the data) given the hypothesis, and then to make a decision about whether the hypothesis is to be accepted (high P) or rejected (low P). A particular case of a hypothesis test is one that determines whether or not the difference between two values is significant—a significance test. For this case we actually put forward the hypothesis that there is no real difference and the observed difference arises from random effects: it is called the null hypothesis (H₀). If the probability that the data are consistent with the null hypothesis falls below a predetermined low value (say 0.05 or 0.01), then the hypothesis is rejected at that probability. Therefore, p < 0.05 means that if the null hypothesis were true we would find the observed data (or more accurately the value of the statistic, or greater, calculated from the data) in less than 5% of repeated experiments.
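A minimal worked example of such a significance test, using the blood-alcohol question above with invented measurement values, tests H₀ that the true level equals the 0.05 g/100 mL limit against the one-sided alternative that it is greater.

```python
# Sketch: one-sided, one-sample significance test of H0: mu = 0.05 g/100 mL
# against H1: mu > 0.05, for replicate measurements (values are invented).
from scipy import stats

measurements = [0.053, 0.052, 0.055, 0.051, 0.054]   # g / 100 mL (hypothetical)
limit = 0.05

t_stat, p_value = stats.ttest_1samp(measurements, popmean=limit,
                                    alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0 at the 5% level: the level exceeds 0.05 g/100 mL.")
else:
    print("Insufficient evidence to reject H0 at the 5% level.")
```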

