On a 2-Relative Entropy

Entropy ◽  
2021 ◽  
Vol 24 (1) ◽  
pp. 74
Author(s):  
James Fullwood

We construct a 2-categorical extension of the relative entropy functor of Baez and Fritz, and show that our construction is functorial with respect to vertical morphisms. Moreover, we show such a ‘2-relative entropy’ satisfies natural 2-categorical analogues of convex linearity, vanishing under optimal hypotheses, and lower semicontinuity. While relative entropy is a relative measure of information between probability distributions, we view our construction as a relative measure of information between channels.

Psihologija ◽  
2007 ◽  
Vol 40 (1) ◽  
pp. 5-35
Author(s):  
Aleksandar Kostic ◽  
Milena Bozic

In this study we investigate the constraints on the probability distribution of grammatical forms within morphological paradigms of the Serbian language, where a paradigm is specified as a coherent set of elements with defined criteria for inclusion. Thus, for example, in Serbian all feminine nouns that end with the suffix "a" in their nominative singular form belong to the third declension, the declension being a paradigm. The notion of a paradigm can be extended to other criteria as well; hence, we can think of noun cases, irrespective of grammatical number and gender, or noun gender, irrespective of case and grammatical number, as paradigms too. We took relative entropy as a measure of the homogeneity of the probability distribution within paradigms. The analysis was performed on 116 typical morphological paradigms of Serbian, and for each paradigm the relative entropy was calculated. The obtained results indicate that for most paradigms the relative entropy values fall within the range 0.75–0.9. The nonhomogeneous distribution of relative entropy values allows for estimating the relative entropy of the morphological system as a whole. This value is 0.69 and can tentatively be taken as an index of the stability of the morphological system.
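In this study, "relative entropy" denotes Shannon entropy normalized by its maximum for the paradigm size, a homogeneity index in [0, 1] that equals 1 for a uniform distribution. A minimal sketch of that index, using hypothetical form frequencies (the actual Serbian paradigm counts are not given in the abstract):

```python
import math

def normalized_entropy(freqs):
    """Shannon entropy of a frequency distribution divided by its
    maximum possible value (log2 of the number of forms), giving a
    homogeneity index in [0, 1]; 1 means a uniform distribution."""
    total = sum(freqs)
    probs = [f / total for f in freqs if f > 0]
    h = -sum(p * math.log2(p) for p in probs)
    h_max = math.log2(len(freqs))
    return h / h_max

# Hypothetical case frequencies for a seven-form paradigm:
print(normalized_entropy([120, 90, 60, 40, 30, 20, 10]))
```

For moderately skewed distributions like the one above, the index falls in the 0.75–0.9 band the study reports for most paradigms.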


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
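As a hypothetical illustration (not taken from the chapter) of the Kullback–Leibler divergence described above, a minimal sketch in Python:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits: the 'disparity'
    between two probability distributions. It is zero iff p == q and
    blows up where q assigns (near-)zero probability but p does not."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Unlike a true metric, the divergence is asymmetric:
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # ~0.737 bits
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # ~0.531 bits
```

The asymmetry visible here is one reason the chapter calls it a "divergence" rather than a distance.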


2009 ◽  
Vol 10 (3) ◽  
pp. 807-819 ◽  
Author(s):  
F. Pappenberger ◽  
A. Ghelli ◽  
R. Buizza ◽  
K. Bódis

Abstract A methodology for evaluating ensemble forecasts, taking into account observational uncertainties for catchment-based precipitation averages, is introduced. Probability distributions for mean catchment precipitation are derived with the Generalized Likelihood Uncertainty Estimation (GLUE) method. The observation uncertainty includes errors in the measurements, uncertainty as a result of the inhomogeneities in the rain gauge network, and representativeness errors introduced by the interpolation methods. The closeness of the forecast probability distribution to the observed fields is measured using the Brier skill score, rank histograms, relative entropy, and the ratio between the ensemble spread and the error of the ensemble-median forecast (spread–error ratio). Four different methods have been used to interpolate observations on the catchment regions. Results from a 43-day period (20 July–31 August 2002) show little sensitivity to the interpolation method used. The rank histograms and the relative entropy better show the effect of introducing observation uncertainty, although this effect on the Brier skill score and the spread–error ratio is not very large. The case study indicates that overall observation uncertainty should be taken into account when evaluating forecast skill.
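One of the verification measures above, the Brier skill score, can be sketched as follows; the forecast probabilities, observed exceedances, and climatological frequency below are hypothetical, not from the study:

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared error between forecast probabilities and binary
    outcomes (1 = event occurred)."""
    f = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((f - o) ** 2))

def brier_skill_score(forecast_probs, outcomes, climatology):
    """Skill relative to a reference that always forecasts the
    climatological probability: 1 is perfect, 0 matches the
    reference, negative values are worse than climatology."""
    bs = brier_score(forecast_probs, outcomes)
    bs_ref = brier_score([climatology] * len(outcomes), outcomes)
    return 1.0 - bs / bs_ref

# Hypothetical probabilities of exceeding a precipitation threshold,
# verified against observed exceedances:
probs = [0.9, 0.7, 0.2, 0.1, 0.8]
obs = [1, 1, 0, 0, 1]
print(brier_skill_score(probs, obs, climatology=0.6))  # ~0.842
```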


2014 ◽  
Vol 15 (1) ◽  
pp. 102-116 ◽  
Author(s):  
Paul A. Dirmeyer ◽  
Jiangfeng Wei ◽  
Michael G. Bosilovich ◽  
David M. Mocko

Abstract A quasi-isentropic, back-trajectory scheme is applied to output from the Modern-Era Retrospective Analysis for Research and Applications (MERRA) and a land-only replay with corrected precipitation to estimate surface evaporative sources of moisture supplying precipitation over every ice-free land location for the period 1979–2005. The evaporative source patterns for any location and time period are effectively two-dimensional probability distributions. As such, the evaporative sources for extreme situations like droughts or wet intervals can be compared to the corresponding climatological distributions using the method of relative entropy. Significant differences are found to be common and widespread for droughts, but not wet periods, when monthly data are examined. At pentad temporal resolution, which is more able to isolate floods and situations of atmospheric rivers, values of relative entropy over North America are typically 50%–400% larger than at monthly time scales. Significant differences suggest that moisture transport may be a key factor in precipitation extremes. Where evaporative sources do not change significantly, it implies other local causes may underlie the extreme events.
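The comparison described above treats evaporative source patterns as two-dimensional probability distributions and measures their departure from climatology with relative entropy. A toy sketch with hypothetical 3×3 source fields (the actual MERRA-derived fields are far larger):

```python
import numpy as np

def relative_entropy(p, q):
    """D(p || q) in bits between two 2-D probability fields,
    computed by flattening each field to a distribution."""
    p, q = p.ravel(), q.ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Hypothetical climatological evaporative source field (sums to 1):
clim = np.array([[0.05, 0.10, 0.05],
                 [0.10, 0.40, 0.10],
                 [0.05, 0.10, 0.05]])
# Hypothetical drought-period field with the source shifted south:
drought = np.array([[0.02, 0.05, 0.02],
                    [0.05, 0.20, 0.05],
                    [0.15, 0.31, 0.15]])

print(relative_entropy(drought, clim))
```

A larger value indicates a drought-period source pattern that departs more strongly from the climatological one, which is the signal the study uses to flag anomalous moisture transport.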


Author(s):  
Mario Berta ◽  
Fernando G. S. L. Brandão ◽  
Christoph Hirche

Abstract We extend quantum Stein’s lemma in asymmetric quantum hypothesis testing to composite null and alternative hypotheses. As our main result, we show that the asymptotic error exponent for testing convex combinations of quantum states $\rho^{\otimes n}$ against convex combinations of quantum states $\sigma^{\otimes n}$ can be written as a regularized quantum relative entropy formula. We prove that in general such a regularization is needed but also discuss various settings where our formula as well as extensions thereof become single-letter. This includes an operational interpretation of the relative entropy of coherence in terms of hypothesis testing. For our proof, we start from the composite Stein’s lemma for classical probability distributions and lift the result to the non-commutative setting by using elementary properties of quantum entropy. Finally, our findings also imply an improved recoverability lower bound on the conditional quantum mutual information in terms of the regularized quantum relative entropy—featuring an explicit and universal recovery map.


2013 ◽  
Vol 756-759 ◽  
pp. 4068-4072 ◽  
Author(s):  
Min Chen ◽  
Fu Yan Wang

A context quantization method for sources, based on a modified K-means clustering algorithm, is presented in this paper. In this algorithm, an adaptive complementary relative entropy between two conditional probability distributions is formulated to describe the similarity of the two distributions, and is used as the distance measure for K-means in place of the usual Euclidean distance. Rules for choosing the initial centers for K-means are also discussed. The proposed algorithm traverses all possible numbers of classes to search for the optimal one, namely the one corresponding to the shortest adaptive code length. The optimal context quantizer is thus obtained rapidly, and the adaptive code length is minimized at the same time. Simulations indicate that the proposed algorithm produces better coding results than other algorithms.
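The abstract does not define its "adaptive complementary relative entropy", so as a hedged sketch of the general idea — K-means over conditional probability distributions with a plain relative entropy as the distance measure — one might write:

```python
import numpy as np

def kl(p, q):
    """D(p || q) in bits: the extra code length incurred by coding a
    source with distribution p using a model q."""
    q = np.maximum(q, 1e-12)  # guard against zeros in the centroid
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def kl_kmeans(dists, k, iters=20, seed=0):
    """K-means over conditional distributions using KL divergence to
    the centroid as the distance. Each centroid is the mean (mixture)
    of its members, which minimizes the cluster's total divergence."""
    rng = np.random.default_rng(seed)
    dists = np.asarray(dists, dtype=float)
    centroids = dists[rng.choice(len(dists), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.array([np.argmin([kl(p, c) for c in centroids])
                           for p in dists])
        for j in range(k):
            members = dists[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids

# Four conditional distributions forming two obvious groups:
contexts = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
labels, _ = kl_kmeans(contexts, k=2)
print(labels)
```

Using the mixture of a cluster's members as its centroid is the code-length-minimizing choice for KL distances; the paper's adaptive complementary variant and its class-count search are not reproduced here.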


1997 ◽  
Vol 161 ◽  
pp. 197-201 ◽  
Author(s):  
Duncan Steel

Abstract Whilst lithopanspermia depends upon massive impacts occurring at a speed above some limit, the intact delivery of organic chemicals or other volatiles to a planet requires the impact speed to be below some other limit such that a significant fraction of that material escapes destruction. Thus the two opposite ends of the impact speed distributions are the regions of interest in the bioastronomical context, whereas much modelling work on impacts delivers, or makes use of, only the mean speed. Here the probability distributions of impact speeds upon Mars are calculated for (i) the orbital distribution of known asteroids; and (ii) the expected distribution of near-parabolic cometary orbits. It is found that cometary impacts are far more likely to eject rocks from Mars (over 99 percent of the cometary impacts are at speeds above 20 km/sec, but at most 5 percent of the asteroidal impacts); paradoxically, the objects impacting at speeds low enough to make organic/volatile survival possible (the asteroids) are those which are depleted in such species.

