Using a Computational Approach for Generalizing a Consensus Measure to Likert Scales of Any Size n

Author(s):  
Mushtaq Abdal Rahem ◽  
Marjorie Darrah

There are many consensus measures that can be computed using Likert data. Although these measures should work with any number n of choices on the Likert scale, they have been most widely studied and demonstrated for n = 5. One measure of consensus, introduced by Akiyama et al. for n = 5 and theoretically generalized to all n, depends on both the mean and the variance, and gives results that can differentiate between some group consensus behavior patterns better than measures that rely on just the mean or just the variance separately. However, this measure is complicated and not easy to apply or understand. This paper addresses these two common problems by introducing a new computational method to find the measure of consensus that works for any number of Likert item choices. The novelty of the approach is that it uses computational methods in n-dimensional space. Numerical examples in three-dimensional (for n = 6) and four-dimensional (for n = 7) spaces are provided to verify that the outputs of the computational and theoretical approaches agree.
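Since the measure depends on the mean and variance of the response distribution, a minimal sketch of those two ingredients on an n-point Likert scale may be helpful; the Akiyama et al. measure itself is not reproduced here, and the function name is ours:

```python
# Mean and variance of responses on an n-point Likert scale -- the two
# ingredients the Akiyama et al. consensus measure depends on. Illustrative
# sketch only; the consensus measure itself is not reproduced here.

def likert_mean_variance(counts):
    """counts[i] = number of respondents choosing option i + 1 (scale 1..n)."""
    total = sum(counts)
    mean = sum((i + 1) * c for i, c in enumerate(counts)) / total
    var = sum(c * ((i + 1) - mean) ** 2 for i, c in enumerate(counts)) / total
    return mean, var

# Example: perfect consensus on the middle option of a 5-point scale.
mean, var = likert_mean_variance([0, 0, 10, 0, 0])
print(mean, var)  # 3.0 0.0
```

A uniform spread of answers, e.g. `[1, 1, 1, 1, 1]`, keeps the same mean (3.0) but raises the variance to 2.0, which is why a mean-only measure cannot separate the two cases.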

1984 ◽  
Vol 21 (4) ◽  
pp. 738-752 ◽  
Author(s):  
Peter Hall

Let n points be distributed independently within a k-dimensional unit cube according to density f. At each point, construct a k-dimensional sphere of content aₙ. Let V denote the vacancy, or 'volume' not covered by the spheres. We derive asymptotic formulae for the mean and variance of V, as n → ∞ and aₙ → 0. The formulae separate naturally into three cases, corresponding to naₙ → 0, naₙ → a (0 < a < ∞) and naₙ → ∞, respectively. We apply the formulae to derive necessary and sufficient conditions for V/E(V) → 1 in L².
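As a sanity check on the setup, the vacancy for uniform f can be estimated by brute-force Monte Carlo. This is a sketch only (the paper derives asymptotic formulae rather than simulating); the function name, trial count, and parameter values are illustrative:

```python
import math
import random

def vacancy_fraction(n, k, a_n, trials=5000, seed=0):
    """Monte Carlo estimate of the fraction of the k-dimensional unit cube
    left vacant when n sphere centres are dropped uniformly (uniform f),
    each sphere having content (k-volume) a_n."""
    rng = random.Random(seed)
    # Radius of a k-ball of volume a_n: a_n = pi^(k/2) r^k / Gamma(k/2 + 1).
    r = (a_n * math.gamma(k / 2 + 1) / math.pi ** (k / 2)) ** (1 / k)
    centres = [[rng.random() for _ in range(k)] for _ in range(n)]
    uncovered = 0
    for _ in range(trials):
        p = [rng.random() for _ in range(k)]
        if all(sum((x - c) ** 2 for x, c in zip(p, cen)) > r * r
               for cen in centres):
            uncovered += 1
    return uncovered / trials

# In the n*a_n -> 0 regime almost nothing is covered, so the vacancy
# fraction sits near 1 (here n*a_n = 0.05):
print(vacancy_fraction(n=50, k=2, a_n=0.001))
```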


2013 ◽  
Vol 2013 ◽  
pp. 1-8
Author(s):  
Ching-fu Shen ◽  
Jin-long Huang ◽  
Chin-san Lee

Interval censored (IC) failure time data are often observed in medical follow-up studies and clinical trials where subjects can only be followed periodically, so the failure time is known only to lie in an interval. In this paper, we propose a weighted Wilcoxon-type rank test for the problem of comparing two IC samples. Under a very general sampling technique developed by Fay (1999), the mean and variance of the test statistic under the null hypothesis can be derived. Through simulation studies, we find that the proposed test performs better than the two existing Wilcoxon-type rank tests proposed by Mantel (1967) and R. Peto and J. Peto (1972). The proposed test is illustrated by means of an example involving patients in AIDS cohort studies.
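For orientation, a plain (unweighted) Wilcoxon rank-sum statistic with midranks for ties can be sketched as below; applying it to interval midpoints is a naive baseline, not the weighted IC test the paper proposes, and all data and names here are hypothetical:

```python
def rank_sum_statistic(x, y):
    """Plain Wilcoxon rank-sum statistic (sum of ranks of sample x in the
    pooled data, midranks for ties). A naive baseline, NOT the weighted
    IC test proposed in the paper."""
    combined = sorted((v, g) for g, sample in ((0, x), (1, y)) for v in sample)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1                      # group ties on the value
        midrank = (i + 1 + j) / 2       # average of ranks i+1 .. j
        for t in range(i, j):
            ranks[t] = midrank
        i = j
    return sum(ranks[idx] for idx, (v, g) in enumerate(combined) if g == 0)

# Crude use on IC data: reduce each interval to its midpoint (hypothetical):
x = [(0 + 2) / 2, (1 + 3) / 2, (4 + 6) / 2]   # 1.0, 2.0, 5.0
y = [(2 + 4) / 2, (5 + 7) / 2]                # 3.0, 6.0
print(rank_sum_statistic(x, y))  # ranks of x in pooled data: 1, 2, 4 -> 7.0
```

The midpoint reduction throws away the interval structure, which is exactly the information the weighted IC test is designed to use properly.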


2007 ◽  
Vol 37 (6) ◽  
pp. 1714-1732 ◽  
Author(s):  
Trevor J. McDougall ◽  
David R. Jackett

Abstract It is shown that the ocean’s hydrography occupies little volume in the three-dimensional space defined by salinity–temperature–pressure (S–Θ–p), and the implications of this observation for the mean vertical transport across density surfaces are discussed. Although ocean data have frequently been analyzed in the two-dimensional temperature–salinity (S–Θ) diagram where casts of hydrographic data are often locally tight in S–Θ space, the relatively empty nature of the World Ocean in the three-dimensional S–Θ–p space seems not to have received attention. The World Ocean’s data lie close to a single surface in this three-dimensional space, and it is shown that this explains the known smallness of the ambiguity in defining neutral surfaces. The ill-defined nature of neutral surfaces means that lateral motion along neutral trajectories leads to mean vertical advection through density surfaces, even in the absence of small-scale mixing processes. The situation in which the ocean’s hydrography occupies a large volume in S–Θ–p space is also considered, and it is suggested that the consequent vertical diapycnal advection would be sufficiently large that the ocean would not be steady.


2021 ◽  
Vol 11 (3) ◽  
pp. 688-696
Author(s):  
Xiaojuan Hu ◽  
Zhaobang Liu ◽  
Xiaodong Yang ◽  
Jiatuo Xu ◽  
Liping Tu ◽  
...  

Background and Objective: The modernization of tongue diagnosis is an important research area in Traditional Chinese Medicine. An accurate and practical tongue segmentation method is a prerequisite for subsequent analyses. In this paper, an unsupervised tongue segmentation method is proposed based on an improved gPb-owt-ucm algorithm, where gPb-owt-ucm is short for globalized probability of boundary, oriented watershed transform, and ultrametric contour map. Methods: The improved gPb-owt-ucm algorithm is adopted because of its powerful contour detection capabilities. The boundary probability of each pixel is calculated from its weight, and the result is converted into multiple closed regions and a hierarchical tree. Finally, a rectangular sliding window locates the accurate tongue boundary to perform the final segmentation. Two experiments are designed to evaluate the method's effectiveness by comparison with the snake method. Results: 300 tongue images (150 diabetic and 150 healthy) were tested in the two experiments. The first validates boundary detection performance (the CBDR experiment); the second validates classification performance (the CCE experiment) between diabetic and healthy tongues. In the CBDR experiment, the mean and variance of the IoU obtained with our improved gPb-owt-ucm method are 0.72 ± 0.19, better than those of the snake method. In the CCE experiment, the precision and F1-score obtained with our method are 1.0 and 0.97 on the diabetic data, respectively, and 0.94 and 0.97 on the healthy data. Conclusion: The effectiveness of our improved unsupervised gPb-owt-ucm method is validated in comparison with the snake method. In the future, we plan to combine the proposed method with a supervised method to further improve tongue segmentation.
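The IoU overlap score reported in the CBDR experiment can be computed as follows. This is only the evaluation metric, not the segmentation pipeline, and the masks below are toy data:

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks given as nested lists
    of 0/1 -- the overlap metric used to score predicted vs. ground-truth
    tongue boundaries."""
    inter = sum(a & b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    union = sum(a | b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    return inter / union if union else 1.0

pred  = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
truth = [[1, 1, 0],
         [1, 0, 0],
         [0, 0, 0]]
print(iou(pred, truth))  # 3 overlapping pixels / 4 pixels in the union = 0.75
```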


Author(s):  
Zoya O. Vyzhva

An estimator for the mean-square approximation of a 3-D homogeneous and isotropic random field is investigated. The problem of statistical simulation of realizations of random fields in three-dimensional space is considered. An algorithm for generating such realizations is formulated; it is constructed on the basis of the mean-square approximation estimator of random fields. A statistical model of Gaussian random fields in three-dimensional space, specified by their statistical characteristics, is constructed.
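A generic way to realize a homogeneous, isotropic Gaussian field at a finite set of 3-D points is Cholesky factorization of an assumed isotropic covariance. The exponential covariance below is our own choice for illustration; this is a textbook sketch, not the paper's mean-square-approximation algorithm:

```python
import math
import random

def cholesky(a):
    """Plain Cholesky factorization of a symmetric positive-definite matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def gaussian_field_3d(points, length_scale=1.0, seed=0):
    """Sample a zero-mean homogeneous, isotropic Gaussian field at the given
    3-D points, using the exponential covariance C(r) = exp(-r / length_scale).
    Generic Cholesky-based sketch, not the paper's estimator algorithm."""
    rng = random.Random(seed)
    n = len(points)
    cov = [[math.exp(-math.dist(points[i], points[j]) / length_scale)
            for j in range(n)] for i in range(n)]
    for i in range(n):
        cov[i][i] += 1e-10  # jitter for numerical stability
    L = cholesky(cov)
    z = [rng.gauss(0, 1) for _ in range(n)]
    # Correlated sample = L @ z (lower-triangular product).
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

grid = [(x, y, z) for x in range(2) for y in range(2) for z in range(2)]
field = gaussian_field_3d(grid, length_scale=2.0)
print(len(field))  # 8 field values, one per grid point
```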


Perception ◽  
1977 ◽  
Vol 6 (3) ◽  
pp. 327-332 ◽  
Author(s):  
Raymond Klein

Four stereoblind and four normal subjects were tested on a mental rotation task. It was hypothesized that, if stereopsis is an important input for building up the perceptual system that represents three-dimensional space, then subjects lacking it ought to be deficient at mental rotations in depth. Stereoblind subjects were equally efficient at picture-plane and depth rotations, and were nonsignificantly better than normal subjects at rotations in depth. It was concluded that in the absence of stereopsis other cues are sufficient for the development of the ‘three-dimensional’ perceptual system. A puzzling paradox was raised, however, by the finding that the introspections of the two groups differed markedly.


2017 ◽  
Vol 14 (130) ◽  
pp. 20170031 ◽  
Author(s):  
Patrice Koehl

In this paper, we propose a new method for computing a distance between two shapes embedded in three-dimensional space. Instead of comparing directly the geometric properties of the two shapes, we measure the cost of deforming one of the two shapes into the other. The deformation is computed as the geodesic between the two shapes in the space of shapes. The geodesic is found as a minimizer of the Onsager–Machlup action, based on an elastic energy for shapes that we define. Its length is set to be the integral of the action along that path; it defines an intrinsic quasi-metric on the space of shapes. We illustrate applications of our method to geometric morphometrics using three datasets representing bones and teeth of primates. Experiments on these datasets show that the variational quasi-metric we have introduced performs remarkably well both in shape recognition and in identifying evolutionary patterns, with success rates similar to, and in some cases better than, those obtained by expert observers.



2019 ◽  
Vol 12 (4) ◽  
Author(s):  
Xi Wang ◽  
Kenneth Holmqvist ◽  
Marc Alexa

The point of interest in three-dimensional space in eye tracking is often computed by intersecting the lines of sight with scene geometry, or by finding the point closest to the two lines of sight. We first carry out a theoretical analysis with synthetic simulations. We show that the mean point of vergence is generally biased for centrally symmetric errors and that the bias depends on the horizontal versus vertical error distribution of the tracked eye positions. Our analysis continues with an evaluation on real experimental data. The error distributions differ among individuals, but they generally lead to the same bias towards the observer, and the bias tends to grow with increasing viewing distance. We also provide a recipe to minimize the bias, which applies to general computations of eye-ray intersection. These findings not only have implications for choosing the calibration method in eye-tracking experiments and for interpreting the observed eye-movement data; they also suggest that the mathematical models of calibration should be considered part of the experiment.
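The "point closest to the two lines of sight" mentioned above is the classical midpoint construction for two (possibly skew) lines. A sketch follows; the eye positions and gaze directions are made-up values, and this is not the paper's bias-minimizing recipe:

```python
def vergence_point(p0, d0, p1, d1):
    """Point minimizing the sum of squared distances to two lines of sight,
    each given by an eye position p and a gaze direction d: find the closest
    points on the two lines, then return their midpoint."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    w = sub(p0, p1)
    a, b, c = dot(d0, d0), dot(d0, d1), dot(d1, d1)
    d, e = dot(d0, w), dot(d1, w)
    denom = a * c - b * b            # zero only for parallel gaze directions
    t0 = (b * e - c * d) / denom     # parameter of closest point on line 0
    t1 = (a * e - b * d) / denom     # parameter of closest point on line 1
    q0 = [p + t0 * v for p, v in zip(p0, d0)]
    q1 = [p + t1 * v for p, v in zip(p1, d1)]
    return [(x + y) / 2 for x, y in zip(q0, q1)]

# Two eyes 6 cm apart, both fixating the point (0, 0, 50):
left, right = [-3.0, 0.0, 0.0], [3.0, 0.0, 0.0]
target = [0.0, 0.0, 50.0]
gaze_l = [t - p for t, p in zip(target, left)]
gaze_r = [t - p for t, p in zip(target, right)]
print(vergence_point(left, gaze_l, right, gaze_r))  # [0.0, 0.0, 50.0]
```

With noisy gaze directions the two closest points separate, and it is this midpoint that the abstract shows to be systematically biased towards the observer.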


2014 ◽  
Author(s):  
Madhur Mangalam ◽  
Nisarg Desai ◽  
Mewa Singh

Lateral asymmetries in body, brain, and cognition are ubiquitous among organisms. Asymmetries in motor-action patterns are a central theme of investigation, among others because they are likely to have shaped primate evolution and, more specifically, primate motor dexterity. Using an adaptationist approach, one would argue that these asymmetries were evolutionarily selected because no bilateral organism can maneuver in three-dimensional space unless one side becomes dominant and always takes the lead. However, which side becomes dominant is beyond the scope of this hypothesis, as there is no apparent advantage or disadvantage associated with either the left or the right side. Both the evolutionary origin and the adaptive significance of asymmetries in motor-action patterns remain largely unexplored. In the present study, we mathematically model how an asymmetry at a lower level could stimulate as well as govern asymmetries at the next higher level, and how this process might reiterate, ultimately lateralizing the whole system. By comparing two systems, one incorporating symmetric and the other asymmetric motor-action patterns, we then show that (a) the asymmetric system performs better than the symmetric one in terms of time optimization, and (b) as the complexity of the task increases, the advantage associated with asymmetries in the motor-action patterns increases. Our minimal model theoretically explains how lateral asymmetries could appear and evolve in a biological system, using a systems-theory approach.

