random experiment
Recently Published Documents

TOTAL DOCUMENTS: 22 (five years: 5)
H-INDEX: 4 (five years: 0)

2021 ◽  
Author(s):  
Abdo Abou Jaoude

The concept of mathematical probability was established in 1933 by Andrey Nikolaevich Kolmogorov, who defined a system of five axioms. This system can be enhanced to encompass the set of imaginary numbers by adding three novel axioms. As a result, any random experiment can be executed in the complex probability set C, which is the sum of the real probability set R and the imaginary probability set M. We aim here to incorporate supplementary imaginary dimensions into the random experiment occurring in the “real” laboratory in R and hence to compute all the probabilities in the sets R, M, and C. Accordingly, the probability in the whole set C = R + M is always equal to one, independently of the distribution of the input random variable in R, and consequently the outcome of the stochastic experiment in R can be determined absolutely in C. This follows from the fact that the probability in C is computed after subtracting the chaotic factor from the degree of our knowledge of the nondeterministic experiment. We will apply this innovative paradigm to Isaac Newton’s classical mechanics and also prove, in an original way, an important property at the foundation of statistical physics.
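The central identity of this paradigm — that the probability in the whole set C is always one — can be checked numerically. A minimal sketch for a single Bernoulli trial, assuming the paradigm's usual formulas (degree of our knowledge DOK = p² + q², chaotic factor Chf = −2pq, and Pc² = DOK − Chf as stated in the abstract); the function name is illustrative, not from the paper:

```python
def cpp_quantities(p):
    """For a real probability p in R, return (DOK, Chf, Pc^2)
    under the complex probability paradigm sketched above."""
    q = 1.0 - p           # complementary real probability
    dok = p**2 + q**2     # degree of our knowledge
    chf = -2.0 * p * q    # chaotic factor (always <= 0)
    pc2 = dok - chf       # probability in C = R + M, squared
    return dok, chf, pc2

# Pc^2 = p^2 + q^2 + 2pq = (p + q)^2 = 1 for every p in [0, 1]
for p in (0.0, 0.3, 0.5, 0.9):
    _, _, pc2 = cpp_quantities(p)
    assert abs(pc2 - 1.0) < 1e-12
```

The loop confirms algebraically what the abstract claims: subtracting the (non-positive) chaotic factor from DOK restores a total probability of exactly one regardless of p.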


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 958
Author(s):  
Maike Tormählen ◽  
Galiya Klinkova ◽  
Michael Grabinski

Statistical significance measures the reliability of a result obtained from a random experiment. We investigate the number of repetitions needed for a statistical result to reach a given significance. In the first step, we consider binomially distributed variables in the example of medication testing with fixed placebo efficacy, asking how many experiments are needed to achieve a significance of 95%. In the next step, we take the probability distribution of the placebo efficacy into account, which, to the best of our knowledge, has not been done before. Depending on the specifics, we show that obtaining identical significance may require twice as many experiments as in a setting where the placebo distribution is neglected. We proceed by considering more general probability distributions and close with comments on some erroneous assumptions about probability distributions which lead, for instance, to a trivial explanation of the fat tail.
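The fixed-placebo-efficacy case can be illustrated with a rough normal approximation to the binomial. The formula and the efficacy values below are illustrative assumptions for a one-sided z-test at the 95% level, not the authors' exact setup:

```python
import math

def trials_needed(p_drug, p_placebo, z=1.645):
    """Rough number of trials per arm so that a one-sided z-test
    (normal approximation to the binomial) separates the drug's
    efficacy from a FIXED placebo efficacy at ~95% significance."""
    diff = p_drug - p_placebo
    # summed Bernoulli variances of the two estimated efficacies
    var = p_drug * (1 - p_drug) + p_placebo * (1 - p_placebo)
    return math.ceil(z**2 * var / diff**2)

# Illustrative numbers: drug helps 60% of patients, placebo 40%
n = trials_needed(0.6, 0.4)   # -> 33 trials per arm
```

The paper's point is that treating the placebo efficacy as a distribution rather than the fixed constant used here can roughly double this count.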


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Haiyan Wang ◽  
Peidi Xu ◽  
Jinghua Zhao

The KNN algorithm is one of the best-known algorithms in machine learning and data mining. It does not preprocess the data before classification, which leads to longer runtimes and more classification errors. To address these problems, this paper first proposes a PK-means++ algorithm, which better ensures the stability of a random experiment. Then, building on it and on spherical region division, an improved algorithm, KNNPK+, is proposed. The algorithm selects the centers of the spherical regions appropriately and then constructs an initial classifier for the training set, improving both the accuracy and the speed of classification.
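The abstract does not spell out PK-means++, but the standard k-means++ seeding it builds on can be sketched in plain Python: each new center is drawn with probability proportional to its squared distance from the nearest center chosen so far. This is background for the technique named above, not the paper's own code:

```python
import random

def kmeans_pp_seeds(points, k, seed=0):
    """Standard k-means++ seeding: draw each next center with
    probability proportional to its squared distance to the
    nearest already-chosen center."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance of each point to its nearest center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c))
                  for c in centers)
              for p in points]
        r, acc = rng.uniform(0, sum(d2)), 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:          # weighted draw by d2
                centers.append(p)
                break
    return centers

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
seeds = kmeans_pp_seeds(pts, 2)   # one seed from each cluster
```

Because the second draw is weighted by squared distance, the two seeds land in the two well-separated clusters with high probability, which is the stability property the PK-means++ variant aims to strengthen.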



Author(s):  
Guillaume Marois ◽  
Samir KC

This chapter introduces the purpose of the book. When a researcher needs to perform microsimulation for population projections, building one's own model with common statistical software such as SAS can be a good option, because this software is widely used among scholars and is taught in most social science departments. We define microsimulation as modelling based on individual-level rather than aggregated data, in which transitions between states are determined stochastically by a random experiment. We finally provide some examples of microsimulation models used by social scientists.
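The stochastic-transition idea described here fits in a few lines. The chapter's own models use SAS; the Python below is an illustrative stand-in with made-up states and transition rates:

```python
import random

# Toy microsimulation: each individual undergoes one random
# experiment per time step to decide its next state
# (states and rates are illustrative, not from the book).
TRANSITIONS = {
    "single":  [("married", 0.10), ("single", 0.90)],
    "married": [("single", 0.02), ("married", 0.98)],
}

def step(state, rng):
    """One stochastic transition: draw u ~ U(0,1) and walk the
    cumulative transition probabilities for the current state."""
    u, acc = rng.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        acc += p
        if u < acc:
            return nxt
    return state

rng = random.Random(42)
population = ["single"] * 1000
for _ in range(10):                       # ten simulated years
    population = [step(s, rng) for s in population]
married = population.count("married")     # aggregate outcome
```

This is the defining contrast with cohort-component projections: the aggregate count emerges from individual-level random experiments rather than from deterministic rates applied to group totals.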


2019 ◽  
Vol 11 (2) ◽  
pp. 184-205
Author(s):  
G. Anjali ◽  
N. K. Sudev

Graph coloring can be considered as a random experiment with the color of a randomly selected vertex as the random variable. In this paper, we consider the L(2, 1)-coloring of G as the random experiment and discuss two fundamental statistical parameters — the mean and the variance — with respect to the L(2, 1)-coloring of certain fundamental graph classes.
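The random experiment described here is easy to make concrete: fix an L(2, 1)-labeling, pick a vertex uniformly at random, and take its label as the random variable. A minimal sketch using a hand-checked labeling of the path P4 (the labeling is an illustration, not taken from the paper):

```python
from statistics import mean, pvariance

# Hand-checked L(2,1)-labeling of the path P4 (v1-v2-v3-v4):
# adjacent labels differ by >= 2, distance-two labels by >= 1.
labels = [1, 3, 0, 2]

# X = label of a uniformly chosen vertex
mu = mean(labels)        # E[X]  = 1.5
var = pvariance(labels)  # Var[X] = 1.25 (population variance)
```

With all four labels distinct, X is uniform on {0, 1, 2, 3}, so the mean and variance follow directly; the paper derives such parameters in closed form for whole graph classes.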


Author(s):  
Oleg Yu. Vorobyev

The aim of the paper is the axiomatic justification of the theory of experience and chance, one of whose dual halves is Kolmogorov probability theory. The author's main idea was the natural inclusion of Kolmogorov's axiomatics of probability theory within the general concepts of the theory of experience and chance. The main result of this work is the axiom of co~event, intended for constructing a theory formed by the dual theories of believabilities and probabilities, each of which is itself postulated by its own Kolmogorov system of axioms. Of course, other systems of postulating the theory of experience and chance can be imagined; however, in this work preference is given to a system of postulates that can describe in the simplest manner the results of what I call an experienced-random experiment.


2018 ◽  
Vol 10 (03) ◽  
pp. 1850030
Author(s):  
N. K. Sudev ◽  
K. P. Chithra ◽  
K. A. Germina ◽  
S. Satheesh ◽  
Johan Kok

Coloring the vertices of a graph G according to certain conditions can be considered as a random experiment, and a discrete random variable X can be defined as the number of vertices having a particular color in a proper coloring of G. The concepts of mean and variance, two important statistical measures, have also been introduced into the theory of graph coloring, and the values of these parameters have been determined for a number of standard graphs. In this paper, we discuss the coloring parameters of the Mycielskian of certain standard graphs.


2016 ◽  
Vol 08 (03) ◽  
pp. 1650052 ◽  
Author(s):  
N. K. Sudev ◽  
K. P. Chithra ◽  
S. Satheesh ◽  
Johan Kok

Coloring the vertices of a graph G according to certain conditions can be considered as a random experiment, and a discrete random variable (r.v.) X can be defined as the number of vertices having a particular color in a proper coloring of G; a probability mass function for this random variable can be defined accordingly. In this paper, we extend the concepts of mean and variance to a modified injective graph coloring and determine the values of these parameters for a number of standard graphs.

