Lagrangian Submanifolds of Symplectic Structures Induced by Divergence Functions

Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 983
Author(s):  
Marco Favretti

Divergence functions play a central role in Information Geometry, as they allow for the introduction of a Riemannian metric and a dual connection structure on a finite-dimensional manifold of probability distributions. They also allow one to define, in a canonical way, a symplectic structure on the square of the above manifold of probability distributions, a property that had received little attention in the literature until recent contributions. In this paper, we hint at a possible application: we study Lagrangian submanifolds of this symplectic structure and show that they are useful for describing the manifold of solutions of the Maximum Entropy principle.
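As a concrete illustration of the Maximum Entropy principle the abstract refers to, here is a minimal numerical sketch on a finite simplex; the observable values and the constraint are purely illustrative, and the numerical optimizer stands in for the analytic Gibbs-form solution:

```python
import numpy as np
from scipy.optimize import minimize

# Maximize Shannon entropy over a 4-state simplex subject to a fixed
# expectation of an observable f. All numbers here are illustrative.
f = np.array([0.0, 1.0, 2.0, 3.0])  # observable values on the 4 states
target_mean = 1.2                   # imposed expectation constraint <f>

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)      # guard the log at the boundary
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},      # normalization
    {"type": "eq", "fun": lambda p: p @ f - target_mean},  # mean constraint
]
res = minimize(neg_entropy, np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4, constraints=constraints)
p = res.x  # MaxEnt solution; analytically p_i is proportional to exp(-lam * f_i)
```

Varying the constraint value traces out a family of MaxEnt solutions, which is the kind of solution manifold the abstract describes.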

2014 ◽  
Vol 580-583 ◽  
pp. 9-16
Author(s):  
Shu Jun Zhang ◽  
Zhi Jun Xu ◽  
Luo Zhong ◽  
Zhao Ran Xiao

Due to various uncertainties, most geotechnical parameters are available only as small samples, which makes it difficult to fit their probability distributions using traditional methods. This paper uses a stochastic weighting method to enlarge small samples of geotechnical parameters into large samples, thus avoiding the problems caused by small sample sizes. The probability density function of a geotechnical parameter is then derived from the maximum entropy principle, and the advantage of the presented method is verified through the Kolmogorov–Smirnov test. A case study shows that the proposed method not only removes the dependence of conventional fitting methods on classical probability distributions, but also fits the data more closely, because the data come from the large sample generated from the geotechnical parameters, which is of practical engineering significance.
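A bootstrap-style random-weighting scheme conveys the idea of enlarging a small sample and then checking the fit with a Kolmogorov–Smirnov test. This is a sketch under that assumption, not necessarily the authors' exact algorithm, and the sample values are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical small sample of a geotechnical parameter (e.g. cohesion, kPa).
small = np.array([18.2, 21.5, 19.8, 22.1, 20.4, 17.9, 23.0, 20.9])

# Random-weighted resampling (Bayesian-bootstrap style): draw Dirichlet
# weights over the observations, then generate a large synthetic sample,
# with a small jitter so the enlarged sample is not purely discrete.
n_big = 5000
weights = rng.dirichlet(np.ones(small.size))
big = rng.choice(small, size=n_big, p=weights)
big = big + rng.normal(0.0, 0.1 * small.std(ddof=1), size=n_big)

# Kolmogorov-Smirnov test of the enlarged sample against a fitted normal.
mu, sigma = big.mean(), big.std(ddof=1)
ks_stat, p_value = stats.kstest(big, "norm", args=(mu, sigma))
```

In practice the candidate density would come from the maximum entropy principle rather than a normal fit; the normal here only keeps the KS-test step self-contained.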


2015 ◽  
Author(s):  
PierGianLuca Porta Mana ◽  
Emiliano Torre ◽  
Vahid Rostami

This note summarizes some mathematical relations between the probability distributions for the states of a network of binary units and a subnetwork thereof, under an assumption of symmetry. These relations are standard results of probability theory, but seem to be rarely used in neuroscience. Some of their consequences for inferences between network and subnetwork, especially in connection with the maximum-entropy principle, are briefly discussed. The meanings and applicability of the assumption of symmetry are also discussed.
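The standard marginalization result alluded to can be sketched numerically: under permutation symmetry, the distribution over N binary units depends only on the count k of active units, and the subnetwork marginal is a hypergeometric mixture. The sizes and count distribution below are illustrative:

```python
import numpy as np
from math import comb

# Under permutation symmetry, P(state) = q(k) / C(N, k), where k is the
# number of active units among N. The marginal count distribution of an
# n-unit subnetwork is then a hypergeometric mixture over k.
N, n = 6, 3
q = np.ones(N + 1) / (N + 1)  # illustrative symmetric count distribution

def subnetwork_count_dist(q, N, n):
    p = np.zeros(n + 1)
    for j in range(n + 1):
        for k in range(N + 1):
            if j <= k and k - j <= N - n:
                # probability of j active subnetwork units given k active overall
                p[j] += q[k] * comb(n, j) * comb(N - n, k - j) / comb(N, k)
    return p

p_sub = subnetwork_count_dist(q, N, n)
```

For a uniform count distribution q, exchangeability implies the subnetwork count is uniform as well, which gives a quick sanity check on the mixture formula.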


2021 ◽  
Vol 3 (1) ◽  
pp. 12
Author(s):  
Ariel Caticha

The mathematical formalism of quantum mechanics is derived, or “reconstructed,” from more basic considerations of probability theory and information geometry. The starting point is the recognition that probabilities are central to QM; the formalism of QM is derived as a particular kind of flow on a finite-dimensional statistical manifold, a simplex. The cotangent bundle associated to the simplex has a natural symplectic structure, and it inherits its own natural metric structure from the information geometry of the underlying simplex. We seek flows that preserve (in the sense of vanishing Lie derivatives) both the symplectic structure (a Hamiltonian flow) and the metric structure (a Killing flow). The result is a formalism in which the Fubini–Study metric, the linearity of the Schrödinger equation, the emergence of complex numbers, Hilbert spaces, and the Born rule are derived rather than postulated.
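One ingredient of such reconstructions, the fact that the information geometry of the simplex is that of a round sphere, can be checked numerically. This is a sketch of the standard fact, not the paper's derivation:

```python
import numpy as np

# The Fisher-Rao metric on the simplex, g_ij = delta_ij / p_i, becomes the
# round-sphere metric under the embedding psi: p -> 2*sqrt(p), which maps
# the simplex onto a patch of the sphere of radius 2.
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(3))    # a point on the 2-simplex
dp = rng.normal(size=3)
dp -= dp.mean()                  # tangent vector: components sum to zero

fisher_len2 = np.sum(dp**2 / p)  # squared length in the Fisher-Rao metric

psi = 2.0 * np.sqrt(p)           # sphere embedding, |psi|^2 = 4
dpsi = dp / np.sqrt(p)           # differential of the embedding
sphere_len2 = np.sum(dpsi**2)    # Euclidean squared length on the sphere
```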


1980 ◽  
Vol 102 (3) ◽  
pp. 460-468
Author(s):  
J. N. Siddall ◽  
Ali Badawy

A new algorithm using the maximum entropy principle is introduced to estimate the probability distribution of a random variable, using a ranked sample directly. It is demonstrated that almost all of the analytical probability distributions can be approximated by the new algorithm. A comparison is made between existing methods and the new algorithm, and examples are given of fitting the new distribution to an actual ranked sample.
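The flavor of such maximum-entropy fits can be sketched by matching the first two sample moments, which yields a density of exponential-polynomial form. This sketch fits the Lagrange multipliers by minimizing the convex dual on a grid and is not the authors' exact algorithm; the ranked sample values are invented:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import trapezoid

# Hypothetical ranked sample and its first two moments.
sample = np.array([1.2, 1.9, 2.3, 2.8, 3.1, 3.6, 4.0, 4.4])
m1, m2 = sample.mean(), np.mean(sample**2)

# The MaxEnt density on [a, b] with two moment constraints has the form
# f(x) proportional to exp(l1*x + l2*x^2); fit (l1, l2) by minimizing
# the dual log Z(l) - l1*m1 - l2*m2.
a, b = 0.0, 6.0
x = np.linspace(a, b, 2001)

def dual(lams):
    l1, l2 = lams
    z = np.clip(l1 * x + l2 * x**2, -700.0, 700.0)  # guard against overflow
    return np.log(trapezoid(np.exp(z), x)) - l1 * m1 - l2 * m2

res = minimize(dual, [0.0, -0.1], method="Nelder-Mead")
l1, l2 = res.x
w = np.exp(l1 * x + l2 * x**2)
f = w / trapezoid(w, x)          # fitted maximum-entropy density
fit_m1 = trapezoid(f * x, x)     # moments of the fit, for checking
fit_m2 = trapezoid(f * x**2, x)
```

With only the first two moments constrained, the fitted density is a (truncated) Gaussian; higher moments would enrich the exponential-polynomial family.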


Entropy ◽  
2019 ◽  
Vol 21 (2) ◽  
pp. 112
Author(s):  
Jan Korbel ◽  
Rudolf Hanel ◽  
Stefan Thurner

In the world of generalized entropies—which, for example, play a role in physical systems with sub- and super-exponential phase space growth per degree of freedom—there are two ways of implementing constraints in the maximum entropy principle: linear and escort constraints. Both appear naturally in different contexts. Linear constraints appear, e.g., in physical systems, when additional information about the system is available through higher moments. Escort distributions appear naturally in the context of multifractals and information geometry. It was shown recently that there exists a fundamental duality that relates both approaches on the basis of the corresponding deformed logarithms (deformed-log duality). Here, we show that there exists another duality that arises in the context of information geometry, relating the Fisher information of ϕ-deformed exponential families that correspond to linear constraints (as studied by J. Naudts) to those that are based on escort constraints (as studied by S.-I. Amari). We explicitly demonstrate this information geometric duality for the case of (c, d)-entropy, which covers all situations that are compatible with the first three Shannon–Khinchin axioms and that include Shannon, Tsallis, Anteneodo–Plastino entropy, and many more as special cases. Finally, we discuss the relation between the deformed-log duality and the information geometric duality and mention that the escort distributions arising in these two dualities are generally different and only coincide for the case of the Tsallis deformation.
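The escort construction at the center of this duality is simple to state; here is a minimal sketch with a Tsallis-type deformation parameter q (the distribution and q values are illustrative):

```python
import numpy as np

# Escort distribution of order q: P_i = p_i^q / sum_j p_j^q.
# Escort constraints fix expectations under P rather than under p.
def escort(p, q):
    w = p**q
    return w / w.sum()

p = np.array([0.5, 0.3, 0.2])
P = escort(p, 2.0)    # q = 2 escort: emphasizes high-probability states

# q -> 1 recovers the original distribution
P1 = escort(p, 1.0)
```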

