On Representations of Divergence Measures and Related Quantities in Exponential Families

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 726
Author(s):  
Stefan Bedbur ◽  
Udo Kamps

Within exponential families, which may consist of multi-parameter and multivariate distributions, a variety of divergence measures, such as the Kullback–Leibler divergence, the Cressie–Read divergence, the Rényi divergence, and the Hellinger metric, can be explicitly expressed in terms of the respective cumulant function and mean value function. Moreover, the same applies to related entropy and affinity measures. We compile representations scattered in the literature and present a unified approach to the derivation in exponential families. As a statistical application, we highlight their use in the construction of confidence regions in a multi-sample setup.
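As an illustration of the kind of representation described (a sketch, not code from the paper): for a one-parameter exponential family with cumulant function A(θ) and mean value function A′(θ), the Kullback–Leibler divergence reduces to a Bregman divergence of A. The Poisson family, with natural parameter θ = log λ and A(θ) = exp(θ), makes this concrete:

```python
import math

def kl_bregman(theta1, theta2, A, A_prime):
    """KL(p_theta1 || p_theta2) expressed as the Bregman divergence of the
    cumulant function A, using the mean value function A'."""
    return A(theta2) - A(theta1) - (theta2 - theta1) * A_prime(theta1)

# Poisson family: natural parameter theta = log(lambda),
# cumulant function A(theta) = exp(theta), mean value function A'(theta) = exp(theta).
def kl_poisson(lam1, lam2):
    return kl_bregman(math.log(lam1), math.log(lam2), math.exp, math.exp)

# Agrees with the direct formula lambda2 - lambda1 + lambda1*log(lambda1/lambda2).
direct = 5.0 - 2.0 + 2.0 * math.log(2.0 / 5.0)
print(abs(kl_poisson(2.0, 5.0) - direct) < 1e-12)  # True
```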

1973 ◽  
Vol 9 (22) ◽  
pp. 528
Author(s):  
E. Ball

1970 ◽  
Vol 7 (3) ◽  
pp. 300-306 ◽  
Author(s):  
David A. Aaker

This article explores the use of the mean value function of a brand choice stochastic model in evaluating two models empirically, using a common set of purchase data. The linear learning model fit the data well, but its mean value function was not capable of making reasonable predictions of successive, aggregate purchasing statistics. Another brand choice model, the new trier model, was found to perform much better. The results suggest that model tests should not be restricted to the usual goodness-of-fit tests, especially in situations of nonstationarity. A structural comparison of the two models focuses on their different approaches to nonstationarity.
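The kind of aggregate check described above can be sketched as follows. This is a hypothetical illustration, not the article's models or data: the learning-operator parameters below are invented, and the mean value function (expected cumulative purchases per period) is estimated by Monte Carlo rather than derived analytically.

```python
import random

def simulate_cumulative_purchases(p0, trials, horizon, seed=0):
    """Monte Carlo estimate of the mean value function (expected cumulative
    purchases through each period) for a linear-learning-style brand choice
    model. Operator parameters are illustrative only:
      after a purchase:    p' = 0.2  + 0.7 * p   (reinforcement)
      after no purchase:   p' = 0.05 + 0.7 * p   (decay)
    Both operators keep p inside [0, 1]."""
    rng = random.Random(seed)
    totals = [0.0] * horizon
    for _ in range(trials):
        p, cum = p0, 0
        for t in range(horizon):
            buy = rng.random() < p
            cum += buy
            p = 0.2 + 0.7 * p if buy else 0.05 + 0.7 * p
            totals[t] += cum
    return [s / trials for s in totals]

mvf = simulate_cumulative_purchases(p0=0.3, trials=5000, horizon=10)
print(mvf)  # nondecreasing expected cumulative purchases, period by period
```

Comparing such a simulated (or analytically derived) mean value function against observed aggregate purchase counts is exactly the kind of test the article argues should supplement ordinary goodness-of-fit checks.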


2015 ◽  
Vol 764-765 ◽  
pp. 979-982
Author(s):  
Jung Hua Lo

Many software reliability growth models (SRGMs) have been developed to estimate useful measures such as the mean value function, the number of remaining faults, and the failure detection rate. Most of these models focus on the failure detection process and do not give equal priority to modeling the fault correction process. However, latent software errors may remain uncorrected for a long time even after they are detected, which increases their impact. The faults remaining in the software are often among the main sources of unreliability. Therefore, we develop a general framework for modeling the failure detection and fault correction processes. Traditional SRGMs further assume that a detected fault is immediately removed and perfectly repaired, with no new faults introduced. In reality, it is impossible to remove all faults during the fault correction process and to keep the software development environment fault-free. To relax this perfect debugging assumption, we introduce the possibility of imperfect debugging. Finally, numerical examples illustrate the results of the unified approach to integrating the detection and correction processes under imperfect debugging.
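A minimal sketch of a paired detection/correction model (assumptions, not the paper's model): detection follows the classic Goel-Okumoto mean value function, and each detected fault's repair time is exponentially distributed, so the correction mean value function is a convolution of the detection intensity with the repair-time CDF.

```python
import math

def m_detect(t, a, b):
    """Goel-Okumoto mean value function: expected faults detected by time t."""
    return a * (1.0 - math.exp(-b * t))

def m_correct(t, a, b, mu, steps=2000):
    """Expected faults corrected by time t, assuming exponential(mu) repair
    times: convolve the detection intensity a*b*exp(-b*s) with the
    repair-time CDF (trapezoidal numeric integration)."""
    if t <= 0:
        return 0.0
    h = t / steps
    def integrand(s):
        intensity = a * b * math.exp(-b * s)      # derivative of m_detect at s
        repaired = 1.0 - math.exp(-mu * (t - s))  # P(repair finished by t)
        return intensity * repaired
    total = 0.5 * (integrand(0.0) + integrand(t))
    total += sum(integrand(i * h) for i in range(1, steps))
    return h * total

# Illustrative parameters: 100 initial faults, detection rate 0.1, repair rate 0.2.
print(m_detect(20.0, 100.0, 0.1), m_correct(20.0, 100.0, 0.1, 0.2))
# Correction always lags detection: m_correct(t) <= m_detect(t).
```

When the repair rate equals the detection rate (mu = b), this convolution reduces to the delayed S-shaped form a·(1 − (1 + b·t)·e^(−b·t)), which gives a convenient correctness check.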


2013 ◽  
Vol 11 (1) ◽  
pp. 2161-2168
Author(s):  
Sridevi Gutta ◽  
Satya R Prasad

The reliability of the software process can be monitored efficiently using Statistical Process Control (SPC), the application of statistical techniques to control a process. SPC studies the best ways of describing and analyzing data and then drawing conclusions or inferences from the available data. With the help of SPC, the software development team can identify the software failure process and determine the actions to be taken to assure better software reliability. This paper provides a control mechanism based on cumulative observations of interval domain data using the mean value function of the Pareto type IV distribution, which is based on a non-homogeneous Poisson process (NHPP). The unknown parameters of the model are estimated using the maximum likelihood approach. The paper also presents an analysis of failure data sets at a particular point and compares the Pareto type II and Pareto type IV models.
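The described control mechanism can be sketched as follows (parameter values are illustrative, not estimates from the paper's data sets; the standard Pareto type IV CDF and the conventional 0.00135/0.99865 SPC probability limits are assumed):

```python
def pareto4_cdf(t, mu, sigma, gamma, alpha):
    """Pareto type IV CDF: F(t) = 1 - [1 + ((t - mu)/sigma)^(1/gamma)]^(-alpha), t > mu."""
    if t <= mu:
        return 0.0
    return 1.0 - (1.0 + ((t - mu) / sigma) ** (1.0 / gamma)) ** (-alpha)

def mean_value(t, a, mu, sigma, gamma, alpha):
    """NHPP mean value function m(t) = a * F(t): expected failures by time t."""
    return a * pareto4_cdf(t, mu, sigma, gamma, alpha)

# Illustrative parameters: a = expected total failures, then shape/scale values.
a, mu, sigma, gamma, alpha = 50.0, 0.0, 100.0, 0.5, 2.0

# Conventional 3-sigma-equivalent SPC probability limits applied to m(t).
UCL, CL, LCL = (a * p for p in (0.99865, 0.5, 0.00135))

# A cumulative failure count falling outside (LCL, UCL) signals an
# out-of-control failure process.
print(mean_value(200.0, a, mu, sigma, gamma, alpha), LCL, CL, UCL)
```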


2017 ◽  
Vol 63 (4) ◽  
pp. 678-688 ◽  
Author(s):  
A B Muravnik

In the half-plane {−∞ < x < +∞} × {0 < y < +∞}, the Dirichlet problem is considered for differential-difference equations of the kind u_xx(x, y) + ∑_{k=1}^{m} a_k u_xx(x + h_k, y) + u_yy(x, y) = 0, where the number of nonlocal terms of the equation is arbitrary and no commensurability conditions are imposed on the coefficients a_1, ..., a_m or on the parameters h_1, ..., h_m determining the translations of the independent variable x. The only condition imposed on the coefficients and parameters of the studied equation is the nonpositivity of the real part of the symbol of the operator acting with respect to the variable x. Earlier, it was proved that this condition (i.e., the strong ellipticity condition for the corresponding differential-difference operator) guarantees the solvability of the considered problem in the sense of generalized functions (according to the Gel’fand-Shilov definition); a Poisson integral representation of a solution was constructed, and the constructed solution was proved to be smooth outside the boundary line. In the present paper, the behavior of this solution as y → +∞ is investigated. We prove the asymptotic closeness between the investigated solution and the solution of the classical Dirichlet problem for the elliptic differential equation (with the same boundary-value function as in the original nonlocal problem) obtained by setting all parameters h_1, ..., h_m of the original differential-difference elliptic equation equal to zero. As a corollary, we prove that the investigated solutions obey the classical Repnikov-Eidel’man stabilization condition: the solution stabilizes as y → +∞ if and only if the mean value of the boundary-value function over the interval (−R, +R) has a limit as R → +∞.
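The equation and the stabilization criterion described above can be written out in display form (notation reconstructed from the abstract; φ denotes the boundary-value function):

```latex
% Dirichlet problem in the half-plane y > 0:
u_{xx}(x,y) + \sum_{k=1}^{m} a_k\, u_{xx}(x + h_k,\, y) + u_{yy}(x,y) = 0,
\qquad u(x,0) = \varphi(x).

% Repnikov--Eidel'man stabilization condition:
\lim_{y \to +\infty} u(x,y) \ \text{exists}
\quad\Longleftrightarrow\quad
\lim_{R \to +\infty} \frac{1}{2R} \int_{-R}^{R} \varphi(x)\, dx \ \text{exists}.
```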


Entropy ◽  
2019 ◽  
Vol 21 (5) ◽  
pp. 485 ◽  
Author(s):  
Frank Nielsen

The Jensen–Shannon divergence is a renowned bounded symmetrization of the unbounded Kullback–Leibler divergence which measures the total Kullback–Leibler divergence to the average mixture distribution. However, the Jensen–Shannon divergence between Gaussian distributions is not available in closed form. To bypass this problem, we present a generalization of the Jensen–Shannon (JS) divergence using abstract means which yields closed-form expressions when the mean is chosen according to the parametric family of distributions. More generally, we define the JS-symmetrizations of any distance using parameter mixtures derived from abstract means. In particular, we first show that the geometric mean is well-suited for exponential families, and report two closed-form formulas for (i) the geometric Jensen–Shannon divergence between probability densities of the same exponential family; and (ii) the geometric JS-symmetrization of the reverse Kullback–Leibler divergence between probability densities of the same exponential family. As a second illustrating example, we show that the harmonic mean is well-suited for the scale Cauchy distributions, and report a closed-form formula for the harmonic Jensen–Shannon divergence between scale Cauchy distributions. Applications to clustering with respect to these novel Jensen–Shannon divergences are touched upon.
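A sketch of why the geometric mean helps in the Gaussian case (assumptions noted in comments, not code from the paper): the normalized geometric mean of two Gaussians is again a Gaussian, with averaged precision and precision-weighted mean, and the Gaussian KL divergence is closed-form, so the resulting geometric JS divergence is closed-form as well.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL(N(m1, s1^2) || N(m2, s2^2)) in closed form."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def geometric_js_gauss(m1, s1, m2, s2):
    """Geometric Jensen-Shannon divergence (alpha = 1/2) between two
    univariate Gaussians: average KL to the normalized geometric mean,
    which is the Gaussian with averaged precision and precision-weighted mean."""
    l1, l2 = 1 / s1**2, 1 / s2**2           # precisions
    lg = 0.5 * (l1 + l2)                    # geometric-mixture precision
    mg = (l1 * m1 + l2 * m2) / (l1 + l2)    # geometric-mixture mean
    sg = 1 / math.sqrt(lg)
    return 0.5 * (kl_gauss(m1, s1, mg, sg) + kl_gauss(m2, s2, mg, sg))

print(geometric_js_gauss(0.0, 1.0, 1.0, 2.0))  # finite, closed form
print(geometric_js_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical inputs
```

Unlike the ordinary JS divergence between Gaussians, no numerical integration against an arithmetic mixture is needed here.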

