Lancaster bivariate probability distributions with Poisson, negative binomial and gamma margins

Test ◽  
1998 ◽  
Vol 7 (1) ◽  
pp. 95-110 ◽  
Author(s):  
Angelo Efoévi Koudou
2010 ◽  
Vol 68 (1) ◽  
pp. 130-143 ◽  
Author(s):  
Philip J. Bacon ◽  
William S. C. Gurney ◽  
Eddie McKenzie ◽  
Bryce Whyte ◽  
Ronald Campbell ◽  
...  

Abstract Bacon, P. J., Gurney, W. S. C., McKenzie, E., Whyte, B., Campbell, R., Laughton, R., Smith, G., and MacLean, J. 2011. Objective determination of the sea age of Atlantic salmon from the sizes and dates of capture of individual fish. – ICES Journal of Marine Science, 68: 130–143. The sea ages of Atlantic salmon indicate crucial differences between oceanic feeding zones that have important implications for conservation and management. Historical fishery-catch records go back more than 100 years, but the reliability with which they discriminate between sea-age classes is uncertain. Research data from some 188 000 scale-aged Scottish salmon that included size (length, weight) and seasonal date of capture on return to the coast were investigated to devise a means of assigning sea age to individual fish objectively. Two simple bivariate probability distributions are described that discriminate between 1SW and 2SW fish with 97% reliability, and between 2SW and 3SW fish with 70% confidence. The same two probability distributions achieve this accuracy across five major east coast Scottish rivers and five decades. They also achieve the same exactitude for a smaller recent dataset from the Scottish west coast, from the River Tweed a century ago (1894/1895), and for salmon caught by rod near the estuary. More surprisingly, they also achieve the same success for rod-caught salmon taken at beats remote from the estuary and including capture dates when some fish could have been in the river for a few months. The implications of these findings for fishery management and conservation are discussed.
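The classification approach can be sketched generically (the paper's actual bivariate distributions are not reproduced here; the Gaussian densities and every parameter below are made-up illustrations): fit one bivariate density per sea-age class over (length, date of capture) and assign each fish to the class with the higher likelihood.

```python
from scipy.stats import multivariate_normal

# Hypothetical class-conditional densities over (length in cm, day of year).
# The Gaussian form and these parameters are illustrative, not the paper's fits.
classes = {
    "1SW": multivariate_normal(mean=[60, 180], cov=[[25, 5], [5, 400]]),
    "2SW": multivariate_normal(mean=[80, 140], cov=[[36, 5], [5, 400]]),
}

def sea_age(length_cm, day_of_year):
    """Assign the sea-age class with the highest likelihood for this fish."""
    return max(classes, key=lambda c: classes[c].pdf([length_cm, day_of_year]))
```

With real data the densities would be fitted per class, and the likelihood ratio between classes yields the kind of discrimination confidence (97%, 70%) quoted above.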


1995 ◽  
Vol 1 (2) ◽  
pp. 163-190 ◽  
Author(s):  
Kenneth W. Church ◽  
William A. Gale

Abstract Shannon (1948) showed that a wide range of practical problems can be reduced to the problem of estimating probability distributions of words and ngrams in text. It has become standard practice in text compression, speech recognition, information retrieval and many other applications of Shannon's theory to introduce a "bag-of-words" assumption. But obviously, word rates vary from genre to genre, author to author, topic to topic, document to document, section to section, and paragraph to paragraph. The proposed Poisson mixture captures much of this heterogeneous structure by allowing the Poisson parameter θ to vary over documents subject to a density function φ. φ is intended to capture dependencies on hidden variables such as genre, author, topic, etc. (The Negative Binomial is a well-known special case where φ is a Γ distribution.) Poisson mixtures fit the data better than standard Poissons, producing more accurate estimates of the variance over documents (σ²), entropy (H), inverse document frequency (IDF), and adaptation (Pr(x ≥ 2 | x ≥ 1)).
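The parenthetical claim — a Γ mixing density φ turns the Poisson into the negative binomial — can be checked numerically. A minimal sketch, assuming SciPy's nbinom parameterization (n, p) and a gamma mixing density with shape n and scale (1 − p)/p:

```python
import numpy as np
from scipy import stats, integrate

n, p = 3.0, 0.4  # negative binomial parameters in SciPy's convention

def mixed_poisson_pmf(k):
    """P(X = k) when X | theta ~ Poisson(theta) and theta ~ Gamma(shape=n, scale=(1-p)/p)."""
    integrand = lambda t: stats.poisson.pmf(k, t) * stats.gamma.pdf(t, a=n, scale=(1 - p) / p)
    val, _ = integrate.quad(integrand, 0, np.inf)
    return val
```

Evaluating `mixed_poisson_pmf(k)` for small k agrees with `stats.nbinom.pmf(k, n, p)` to quadrature accuracy.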


2019 ◽  
Vol 3 ◽  
pp. 11-20 ◽  
Author(s):  
Binod Kumar Sah ◽  
A. Mishra

Background: The exponential and the Lindley (1958) distributions occupy central places among the class of continuous probability distributions and play important roles in statistical theory. A Generalised Exponential-Lindley Distribution (GELD) was given by Mishra and Sah (2015), of which both the exponential and the Lindley distributions are particular cases. Mixtures of distributions form an important class of distributions in the domain of probability distributions. A mixture distribution arises when some or all the parameters in a probability function vary according to a certain probability law. In this paper, a Generalised Exponential-Lindley Mixture of Poisson Distribution (GELMPD) has been obtained by mixing the Poisson distribution with the GELD. Materials and Methods: It is based on the concept of the generalisations of some continuous mixtures of Poisson distribution. Results: The probability mass function of the GELMPD has been obtained by mixing the Poisson distribution with the GELD. The first four moments about the origin of this distribution have been obtained. The estimation of its parameters has been discussed using the method of moments and also the method of maximum likelihood. This distribution has been fitted to a number of discrete datasets which are negative binomial in nature, and it has been observed that it gives a better fit than the Poisson–Lindley Distribution (PLD) of Sankaran (1970). Conclusion: The p-value of the GELMPD is found to be greater than that of the PLD. Hence, it is expected to be a better alternative to the PLD of Sankaran for similar types of discrete datasets which are negative binomial in nature.
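The mixing construction is easy to illustrate with the simpler Poisson–Lindley case cited above (a sketch of the general idea, not the GELMPD itself, whose density is not reproduced here): integrating the Poisson likelihood against the Lindley(θ) density f(y) = θ²(1 + y)e^(−θy)/(θ + 1) reproduces Sankaran's closed-form pmf θ²(θ + x + 2)/(θ + 1)^(x+3).

```python
import math
from scipy import integrate

def lindley_pdf(y, theta):
    """Lindley density: theta^2 / (theta + 1) * (1 + y) * exp(-theta * y)."""
    return theta ** 2 / (theta + 1) * (1 + y) * math.exp(-theta * y)

def pld_pmf_numeric(x, theta):
    """Poisson-Lindley pmf obtained by numerically mixing Poisson(y) over Lindley(theta)."""
    integrand = lambda y: math.exp(-y) * y ** x / math.factorial(x) * lindley_pdf(y, theta)
    val, _ = integrate.quad(integrand, 0, math.inf)
    return val

def pld_pmf_closed(x, theta):
    """Sankaran's (1970) closed form for the Poisson-Lindley distribution."""
    return theta ** 2 * (theta + x + 2) / (theta + 1) ** (x + 3)
```

The GELMPD replaces the Lindley mixing density with the GELD, but the recipe — integrate the Poisson pmf against the mixing density — is the same.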


2019 ◽  
Author(s):  
Lisa Amrhein ◽  
Kumar Harsha ◽  
Christiane Fuchs

Summary Several tools analyze the outcome of single-cell RNA-seq experiments, and they often assume a probability distribution for the observed sequencing counts. It is an open question which discrete distribution is the most appropriate, not only in terms of model estimation, but also regarding interpretability, complexity and biological plausibility of the inherent assumptions. To address the question of interpretability, we investigate mechanistic transcription and degradation models underlying commonly used discrete probability distributions. Known bottom-up approaches infer steady-state probability distributions such as Poisson or Poisson-beta distributions from different underlying transcription–degradation models. By turning this procedure upside down, we show how to infer a corresponding biological model from a given probability distribution, here the negative binomial distribution. Realistic mechanistic models underlying this distributional assumption were unknown so far. Our results indicate that the negative binomial distribution arises as the steady-state distribution of a mechanistic model that produces mRNA molecules in bursts. We empirically show that it provides a convenient trade-off between computational complexity and biological simplicity.
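A minimal Gillespie-style sketch of such a bursty model (all parameters assumed for illustration): bursts arrive as a Poisson process with rate a, each burst adds a geometric number of mRNA molecules with mean b, and each molecule degrades at unit rate. For geometric bursts the steady state is negative binomial, with mean a·b and variance a·b·(1 + b), which a time-averaged simulation recovers:

```python
import math
import random

random.seed(0)
a = 5.0                 # burst arrival rate (assumed)
b = 2.0                 # mean burst size (assumed); geometric on {0, 1, ...}
q = b / (1 + b)         # geometric parameter so that the mean burst size is b

n, t, T = 0, 0.0, 5000.0
area = area2 = 0.0      # time integrals of n and n^2, for time-averaged moments

while t < T:
    total = a + n       # burst rate plus total degradation rate (unit per molecule)
    dt = min(random.expovariate(total), T - t)
    area += n * dt
    area2 += n * n * dt
    t += dt
    if t >= T:
        break
    if random.random() < a / total:
        # sample a geometric burst size on {0, 1, 2, ...}: floor(log U / log q)
        n += int(math.log(random.random()) / math.log(q))
    else:
        n -= 1          # one molecule degrades

mean_n = area / T                  # should approach a * b = 10
var_n = area2 / T - mean_n ** 2    # should approach a * b * (1 + b) = 30
```

The simulated moments match the negative binomial prediction, consistent with the paper's conclusion that bursty production underlies that distributional assumption.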


1986 ◽  
Vol 18 (03) ◽  
pp. 660-678 ◽  
Author(s):  
C. Radhakrishna Rao ◽  
D. N. Shanbhag

The problem of identifying solutions of general convolution equations relative to a group has been studied in two classical papers by Choquet and Deny (1960) and Deny (1961). Recently, Lau and Rao (1982) have considered the analogous problem relative to a certain semigroup of the real line, which extends the results of Marsaglia and Tubilla (1975) and a lemma of Shanbhag (1977). The extended versions of Deny's theorem contained in the papers by Lau and Rao, and Shanbhag (which we refer to as LRS theorems) yield as special cases improved versions of several characterizations of exponential, Weibull, stable, Pareto, geometric, Poisson and negative binomial distributions obtained by various authors during the last few years. In this paper we review some of the recent contributions to characterization of probability distributions (whose authors do not seem to be aware of LRS theorems or special cases existing earlier) and show how improved versions of these results follow as immediate corollaries to LRS theorems. We also give a short proof of the Lau–Rao theorem based on Deny's theorem and thus establish a direct link between the results of Deny (1961) and those of Lau and Rao (1982). A variant of the Lau–Rao theorem is proved and applied to some characterization problems.
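A concrete instance of the kind of characterization these theorems improve: the lack-of-memory property P(X > s + t | X > t) = P(X > s) singles out the geometric law among lattice distributions and the exponential among continuous ones. The property itself is easy to verify numerically (an illustrative sketch, not the LRS machinery):

```python
from scipy import stats

geom = stats.geom(0.3)          # geometric on {1, 2, ...}
expo = stats.expon(scale=2.0)   # exponential with mean 2

def memoryless_gap(dist, s, t):
    """P(X > s + t | X > t) - P(X > s); identically zero for memoryless laws."""
    return dist.sf(s + t) / dist.sf(t) - dist.sf(s)
```

For any other distribution in these families the gap is nonzero for some (s, t), which is the content of the classical characterizations.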


Author(s):  
Donald L. J. Quicke ◽  
Buntika A. Butcher ◽  
Rachel A. Kruft Welton

Abstract There are a number of in-built probability distributions, including uniform, binomial, negative binomial, normal, log-normal, logistic, exponential, Chi-squared, Poisson, gamma, Fisher's F, Student's t, Weibull and others. These are used to generate p-values from test statistics, to generate random values from a distribution, or to generate expected distributions. This chapter deals with standard distributions in R (a programming language with a huge range of in-built statistical and graphical functions), focusing on the normal, Student's t, log-normal, logistic, Poisson, gamma and Chi-squared distributions.
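The chapter's examples are in R; the same three uses — p-values from test statistics, random draws, and expected distributions — have direct analogues in Python's scipy.stats, shown here as a sketch (the statistic 7.81 below is just an illustrative chi-squared value near the 5% critical point for 3 degrees of freedom):

```python
from scipy import stats

# p-value from a test statistic: upper tail of a chi-squared with 3 df
pval = stats.chi2.sf(7.81, 3)      # close to 0.05

# random values from a distribution (seeded for reproducibility)
draws = stats.norm.rvs(loc=0.0, scale=1.0, size=5, random_state=42)

# expected distribution: Poisson(2) probabilities for counts 0..4
expected = [stats.poisson.pmf(k, 2.0) for k in range(5)]
```

R's d/p/q/r naming (e.g. `dpois`, `ppois`, `qpois`, `rpois`) maps onto scipy's `pmf`/`pdf`, `cdf`, `ppf` and `rvs` methods.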


2000 ◽  
Vol 62 (2) ◽  
pp. 211-220 ◽  
Author(s):  
Jesús de la Cal ◽  
Ana M. Valle

We consider tensor product operators and discuss their best constants in preservation inequalities concerning the usual moduli of continuity. In a previous paper, we obtained lower and upper bounds on such constants, under fairly general assumptions on the operators. Here, we concentrate on the l∞-modulus of continuity and three celebrated families of operators. For the tensor product of k identical copies of the Bernstein operator Bn, we show that the best uniform constant coincides with the dimension k when k ≥ 3, while, in case k = 2, it lies in the interval [2, 5/2] but depends upon n. Similar results also hold when Bn is replaced by a univariate Szász or Baskakov operator. The three proofs follow the same pattern, a crucial ingredient being some special properties of the probability distributions involved in the mentioned operators, namely: the binomial, Poisson, and negative binomial distributions.
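The binomial connection mentioned at the end comes from the probabilistic form of the Bernstein operator, B_n f(x) = E[f(S_n/n)] with S_n ~ Binomial(n, x) (the Szász and Baskakov operators replace the binomial weights with Poisson and negative binomial ones). A short sketch of this form, which preserves linear functions exactly and maps t² to x² + x(1 − x)/n:

```python
from scipy import stats

def bernstein(f, n, x):
    """Bernstein operator B_n f(x) = E[f(S_n / n)] with S_n ~ Binomial(n, x)."""
    return sum(stats.binom.pmf(k, n, x) * f(k / n) for k in range(n + 1))
```

For example, `bernstein(lambda t: t, 50, 0.3)` returns 0.3 (linear functions are fixed points), while quadratics pick up the x(1 − x)/n correction.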

