A Note on Ordering Probability Distributions by Skewness

Symmetry ◽  
2018 ◽  
Vol 10 (7) ◽  
pp. 286
Author(s):  
V. García ◽  
M. Martel-Escobar ◽  
F. Vázquez-Polo

This paper describes a complementary tool for fitting probability distributions in data analysis. First, we examine the well-known bivariate index of skewness and the aggregate skewness function, and then introduce orderings of the skewness of probability distributions. Using an example, we highlight the advantages of this approach and then present results for these orderings in common uniparametric families of continuous distributions, showing that the orderings are well suited to the intuitive conception of skewness and, moreover, that the skewness can be controlled via the parameter values.
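The claim that skewness can be controlled via parameter values is easy to check numerically in a familiar uniparametric family. As an illustration of ours (not the paper's ordering machinery), the gamma family has moment skewness 2/√k, which decreases as the shape parameter k grows:

```python
from scipy import stats

# Moment skewness of Gamma(k) is 2/sqrt(k): it shrinks as the shape
# parameter grows, so larger k yields a "less skewed" family member.
for k in [1.0, 4.0, 16.0]:
    skew = float(stats.gamma(a=k).stats(moments="s"))
    print(k, skew)
```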

2020 ◽  
Vol 1 (4) ◽  
pp. 229-238
Author(s):  
Devi Munandar ◽  
Sudradjat Supian ◽  
Subiyanto Subiyanto

The influence of social media in disseminating information, especially during the COVID-19 pandemic, can be observed over time intervals, so the probability of the number of tweets posted by netizens on social media can be modelled. The nonhomogeneous Poisson process (NHPP) is a Poisson process whose rate depends on time, with exponentially distributed inter-event times that have unequal parameter values and are mutually independent. In the initial state, the probability that no event has occurred is one and the probability that an event has occurred is zero. This paper uses the nonhomogeneous Poisson process to predict and count the number of tweets containing the keywords "coronavirus" and "COVID-19" over fixed daily time intervals. Tweets posted in one daily interval do not affect those in the next, and the numbers of tweets differ between intervals. The dataset was obtained by crawling COVID-19 tweets three times a day, for 20 minutes each time, over 13 days, giving 39 time intervals. The study yields predictions and calculated probabilities for the number of tweets, reflecting netizens' tendency to post about the COVID-19 pandemic.
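For an NHPP with intensity λ(t), the count of events in an interval is Poisson-distributed with mean ∫λ(t)dt. A minimal sketch, with a purely hypothetical decaying intensity standing in for the rate the paper estimates from crawled tweets:

```python
import math
from scipy import integrate

# Hypothetical intensity (tweets per minute); the paper estimates its own
# rate from crawled COVID-19 tweets -- this decaying form is illustrative.
def intensity(t):
    return 50.0 * math.exp(-t / 30.0)

def prob_k_events(k, t1, t2):
    """P(N(t1, t2) = k) for an NHPP: Poisson with mean = integral of intensity."""
    mean, _ = integrate.quad(intensity, t1, t2)
    return math.exp(-mean) * mean**k / math.factorial(k)

# At an empty window the probability of no event is 1, matching the
# initial condition stated in the abstract.
print(prob_k_events(0, 0.0, 0.0))
```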


2019 ◽  
Vol 3 ◽  
Author(s):  
Charlotte Olivia Brand ◽  
James Patrick Ounsley ◽  
Daniel Job Van der Post ◽  
Thomas Joshua Henry Morgan

This paper introduces a statistical technique known as “posterior passing” in which the results of past studies can be used to inform the analyses carried out by subsequent studies. We first describe the technique in detail and show how it can be implemented by individual researchers on an experiment-by-experiment basis. We then use a simulation to explore its success in identifying true parameter values compared to current statistical norms (ANOVAs and GLMMs). We find that posterior passing allows the true effect in the population to be found with greater accuracy and consistency than the other analysis types considered. Furthermore, posterior passing performs almost identically to a data analysis in which all data from all simulated studies are combined and analysed as one dataset. On this basis, we suggest that posterior passing is a viable means of implementing cumulative science. Furthermore, because it prevents the accumulation of large bodies of conflicting literature, it alleviates the need for traditional meta-analyses. Instead, posterior passing cumulatively and collaboratively provides clarity in real time as each new study is produced and is thus a strong candidate for a new, cumulative approach to scientific analyses and publishing.
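A toy version of posterior passing can be sketched with a conjugate Beta-Binomial model, where each simulated study's posterior becomes the next study's prior (the paper's simulations use richer GLMM-style models; this minimal analogue is ours):

```python
import random

random.seed(1)
true_p = 0.3          # effect each simulated study tries to estimate

# Posterior passing: study n's posterior Beta(a, b) is study n+1's prior.
a, b = 1.0, 1.0       # flat Beta(1, 1) prior for the first study
for study in range(20):
    successes = sum(random.random() < true_p for _ in range(50))
    a += successes            # standard conjugate Beta-Binomial update
    b += 50 - successes

# After 20 chained studies the posterior mean sits close to true_p,
# as if all 1000 trials had been analysed as one dataset.
posterior_mean = a / (a + b)
print(round(posterior_mean, 3))
```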


2018 ◽  
Author(s):  
Daniel Mortlock

Mathematics is the language of quantitative science, and probability and statistics are the extension of classical logic to real-world data analysis and experimental design. The basics of mathematical functions and probability theory are summarized here, providing the tools for statistical modeling and assessment of experimental results. There is a focus on the Bayesian approach to such problems (i.e., Bayesian data analysis); therefore, the basic laws of probability are stated, along with several standard probability distributions (e.g., binomial, Poisson, Gaussian). A number of standard classical tests (e.g., p values, the t-test) are also defined and, to the degree possible, linked to the underlying principles of probability theory. This review contains 5 figures, 1 table, and 15 references. Keywords: Bayesian data analysis, mathematical models, power analysis, probability, p values, statistical tests, statistics, survey design
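As a minimal concrete instance of the classical tests the review summarizes, a one-sample t-test computed with SciPy (the data are invented for illustration): the p-value is the probability, under the null hypothesis, of a t-statistic at least as extreme as the one observed.

```python
from scipy import stats

# One-sample t-test against a null population mean of 0.
data = [0.8, 1.2, 0.4, 1.9, 0.7, 1.1, 1.5, 0.2]
t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)
print(t_stat, p_value)
```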


1984 ◽  
Vol 21 (04) ◽  
pp. 924-929 ◽  
Author(s):  
Raymond J. Hickey

Majorisation is used to compare continuous distributions in terms of randomness. General results on randomness in the continuous case are given, and these are used to investigate the connection between randomness and parameter values in some well-known families of distributions, including the normal and gamma.
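One standard numerical proxy for such randomness comparisons is differential entropy. As an illustration of ours (not Hickey's majorisation argument), the entropy of the normal family grows with its scale parameter, matching the intuition that a wider normal is "more random":

```python
import math
from scipy import stats

# Differential entropy of N(0, s^2) is ln(s * sqrt(2*pi*e)),
# monotonically increasing in the scale parameter s.
for s in [0.5, 1.0, 2.0]:
    h = float(stats.norm(scale=s).entropy())
    print(s, h, math.log(s * math.sqrt(2 * math.pi * math.e)))
```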


This chapter presents the general format of higher-order neural networks (HONNs) for nonlinear data analysis, along with six different HONN models. It then proves mathematically that HONN models can converge with mean squared errors close to zero, and illustrates the learning algorithm with its update formulas. HONN models are compared with SAS nonlinear (NLIN) models; the results show that HONN models are 3% to 12% better than the SAS nonlinear models. Finally, the chapter shows how to use HONN models to find the best model, order, and coefficients without writing the regression expression, declaring parameter names, or supplying initial parameter values.
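The idea of feeding higher-order input terms into a model can be sketched, under our own simplification, as ordinary least squares on a design matrix with quadratic and cross terms (this is an analogue of the higher-order idea, not the chapter's six HONN models or their learning algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated by a second-order rule: y = 1 + 2*x1 + 3*x1*x2.
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 0] * X[:, 1]

# Higher-order design matrix: constant, linear, cross, and square terms.
design = np.column_stack([
    np.ones(len(X)), X[:, 0], X[:, 1],
    X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2,
])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.round(coef, 3))   # recovers 1, 2, 0, 3, 0, 0
```

A purely linear design matrix could not represent the x1*x2 interaction; adding the higher-order columns makes the fit exact.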


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 14 ◽  
Author(s):  
Martino Trassinelli

We present here Nested_fit, a Bayesian data analysis code developed for investigations of atomic spectra and other physical data. It is based on the nested sampling algorithm, with the implementation of an upgraded lawn-mower-robot method for finding new live points. For a given data set and a chosen model, the program provides the Bayesian evidence, for the comparison of different hypotheses/models, together with the probability distributions of the different parameters. A large database of spectral profiles is already available (Gaussian, Lorentz, Voigt, log-normal, etc.) and additional ones can easily be added. It is written in Fortran for optimized parallel computation, and it is accompanied by a Python library for visualizing the results.
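The core of the nested sampling algorithm that Nested_fit builds on can be sketched in a few lines. Here the search for new live points uses naive rejection sampling rather than the lawn-mower-robot method, and the toy problem (unit-Gaussian likelihood under a uniform prior, where the evidence is analytically about 0.1) is our own:

```python
import math
import random

random.seed(0)

def loglike(theta):
    # Unit-Gaussian likelihood centred at 0.
    return -0.5 * theta**2 - 0.5 * math.log(2 * math.pi)

def logaddexp(a, b):
    if a == -math.inf:
        return b
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Minimal nested sampling for the evidence Z = integral of the likelihood
# over a U(-5, 5) prior.
n_live = 200
live = [random.uniform(-5.0, 5.0) for _ in range(n_live)]
log_z, prev_x = -math.inf, 1.0
for i in range(1, 1001):
    worst = min(range(n_live), key=lambda j: loglike(live[j]))
    threshold = loglike(live[worst])
    x = math.exp(-i / n_live)            # expected prior-volume shrinkage
    log_z = logaddexp(log_z, threshold + math.log(prev_x - x))
    prev_x = x
    cand = random.uniform(-5.0, 5.0)     # replacement above the threshold
    while loglike(cand) <= threshold:    # (naive rejection sampling)
        cand = random.uniform(-5.0, 5.0)
    live[worst] = cand

for theta in live:                       # leftover live-point contribution
    log_z = logaddexp(log_z, loglike(theta) + math.log(x / n_live))

print(math.exp(log_z))   # analytic evidence here is ~0.1
```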


1976 ◽  
Vol 33 (4) ◽  
pp. 793-809 ◽  
Author(s):  
C. C. Huang ◽  
Ilan B. Vertinsky ◽  
Norman J. Wilimovsky

Mathematical proofs and analyses of solution methods are presented for determining optimal policies for the management of a single-species fishery under equilibrium conditions. Previous intuitive arguments for the solution of optimal policies controlling mesh size and fishing rate given complete information are explicitly proven. The analysis is extended to the case where some of the parameters describing the dynamics of the population are known only imprecisely to the manager. Using probability distributions for those unknown parameter values, the problem is cast as a stochastic program in which expected sustained net revenues from the fishery are maximized. The associated problem of optimal allocation of research resources under uncertainty is considered by evaluating the direct value of such information to management activities. Examples and algorithms are presented for the class of problems discussed.
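The stochastic-program formulation can be sketched with a deliberately simplified surplus-production model (the logistic/Schaefer form and all numbers below are our assumptions, not the paper's model): sample the imprecisely known growth parameter from its distribution, then pick the fishing rate that maximizes expected sustained net revenue.

```python
import random

random.seed(42)

# Hypothetical Schaefer model: at fishing mortality F, equilibrium yield is
# F * K * (1 - F / r). The growth rate r is known only as a distribution,
# as in the paper's imprecise-information case.
K, price, cost = 1000.0, 2.0, 300.0
r_samples = [random.gauss(0.5, 0.1) for _ in range(5000)]

def expected_net_revenue(F):
    total = 0.0
    for r in r_samples:
        yield_eq = F * K * max(0.0, 1.0 - F / r)   # stock collapses if F >= r
        total += price * yield_eq - cost * F
    return total / len(r_samples)

# Stochastic program: choose F maximizing expected sustained net revenue.
grid = [i / 100.0 for i in range(1, 50)]
best_F = max(grid, key=expected_net_revenue)
print(best_F, round(expected_net_revenue(best_F), 1))
```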


2018 ◽  
Vol 612 ◽  
pp. L3 ◽  
Author(s):  
Michael R. Meyer ◽  
Adam Amara ◽  
Maddalena Reggiani ◽  
Sascha P. Quanz

Aims. We fit a log-normal function to the M-dwarf orbital surface density distribution of gas giant planets, over the mass range 1–10 times that of Jupiter, from 0.07 to 400 AU. Methods. We used a Markov chain Monte Carlo approach to explore the likelihoods of various parameter values consistent with point estimates of the data, given our assumed functional form. Results. This fit is consistent with radial velocity, microlensing, and direct-imaging observations, is well motivated from theoretical and phenomenological points of view, and predicts the results of future surveys. We present probability distributions for each parameter and a maximum-likelihood solution. Conclusions. We suggest that this function makes more physical sense than other widely used functions, and we explore the implications of our results for the design of future exoplanet surveys.
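The fitting approach can be sketched with a plain Metropolis random walk over the log-normal parameters (the data below are invented stand-ins for the paper's point estimates, and the paper's actual likelihood and priors may differ):

```python
import math
import random

random.seed(7)

# Synthetic stand-in data: log10 semimajor axes (AU) of detected planets.
log_a = [math.log10(x) for x in [0.1, 0.5, 1.2, 2.0, 3.5, 8.0, 20.0, 45.0]]

def loglike(mu, sigma):
    # Gaussian in log-space = log-normal in linear space; flat priors.
    if sigma <= 0:
        return -math.inf
    return sum(-0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma)
               for x in log_a)

# Plain Metropolis random walk over (mu, sigma).
mu, sigma = 0.0, 1.0
samples = []
for _ in range(20000):
    mu_p = mu + random.gauss(0, 0.2)
    sigma_p = sigma + random.gauss(0, 0.2)
    delta = loglike(mu_p, sigma_p) - loglike(mu, sigma)
    if delta >= 0 or random.random() < math.exp(delta):
        mu, sigma = mu_p, sigma_p
    samples.append((mu, sigma))

# Discard burn-in, then summarize the posterior over mu.
burned = samples[5000:]
mu_hat = sum(m for m, _ in burned) / len(burned)
print(round(mu_hat, 2))
```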


2020 ◽  
Author(s):  
Liang Yuan

In situ measurements are performed to study the size-resolved hygroscopic behaviour of submicron aerosols during pollution and fireworks episodes in winter, from late January to February 2019, in Chengdu, a megacity in the Sichuan Basin, using a humidity tandem differential mobility analyser (H-TDMA). The H-TDMA is operated at a relative humidity of 90% with dry aerosol diameters between 40 and 200 nm. Three modes of aerosol particles, namely a nearly hydrophobic mode (NH), a less hygroscopic mode (LH), and a more hygroscopic mode (MH), are found in the probability distributions of the growth factor (GF-PDF) during the campaign. The GF-PDF shows that aerosol particles are usually externally mixed. The average ensemble-mean hygroscopicity parameter values (κ_Mean) over the entire sampling period are 0.16, 0.19, 0.21, 0.23, and 0.26 for aerosols with diameters of 40, 80, 110, 150, and 200 nm, respectively. These averages are lower than those in Shanghai and Nanjing; κ_Mean for aerosols larger than 110 nm, however, is higher than those in Beijing and Guangzhou during winter. Distinct diurnal patterns at all measured sizes are observed for the number fractions of the NH (NF_NH) and MH (NF_MH) modes, as well as for the κ-PDF and κ_Mean. The NF_NH values are lower, and κ_Mean exhibits peak values, during daytime, when more aerosols are internally mixed because of photochemical ageing. The number fraction of LH (NF_LH) for 40-nm aerosols in clean periods (CPs) is larger than that in pollution episodes (PEs) because of increased SOA formation. More aerosols with diameters larger than 80 nm are internally mixed during CPs and the contaminant-accumulation stage, resulting in higher κ_Mean values than in PEs.
The aerosol emissions of fireworks that accumulate during Chinese New Year's Eve contribute to a slow, continuous increase in κ_Mean, with average values of 0.19, 0.19, 0.21, 0.23, and 0.27 for the 40-, 80-, 110-, 150-, and 200-nm aerosols, respectively; these values are higher than those on the pre- and post-fireworks days. The hygroscopic properties of submicron aerosols in Chengdu are essential for understanding the formation and evolution of severe haze events in the Sichuan Basin.
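The hygroscopicity parameter κ reported above is conventionally related to a measured growth factor through κ-Köhler theory. Neglecting the Kelvin term, κ = (GF³ − 1)(1 − a_w)/a_w, with water activity a_w approximately the fractional relative humidity; a minimal sketch (the example GF value is ours):

```python
# kappa-Koehler relation with the Kelvin term neglected:
# kappa = (GF^3 - 1) * (1 - a_w) / a_w, where a_w ~ RH / 100.
def kappa_from_gf(gf, rh_percent=90.0):
    a_w = rh_percent / 100.0
    return (gf**3 - 1.0) * (1.0 - a_w) / a_w

# A growth factor of 1.26 at 90% RH corresponds to kappa of about 0.11.
print(round(kappa_from_gf(1.26), 3))
```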

