Calibration Invariance of the MaxEnt Distribution in the Maximum Entropy Principle

Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 96
Author(s):  
Jan Korbel

The maximum entropy principle consists of two steps: The first step is to find the distribution that maximizes entropy under given constraints. The second step is to calculate the corresponding thermodynamic quantities. The second step is determined by the relation of the Lagrange multipliers to measurable physical quantities such as temperature or Helmholtz free energy/free entropy. We show that for a given MaxEnt distribution, a whole class of entropies and constraints leads to the same distribution but generally to different thermodynamics. Two simple classes of transformations that preserve the MaxEnt distributions are studied: The first case is a transform of the entropy to an arbitrary increasing function of that entropy. The second case is a transform of the energetic constraint to a combination of the normalization and energetic constraints. We derive the group transformations of the Lagrange multipliers corresponding to these transformations and determine their connections to the thermodynamic quantities. For each case, we provide a simple example of this transformation.
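
A minimal numerical sketch (not taken from the paper) of the first class of transformations: maximizing the Shannon entropy S(p) and a monotone increasing function of it, here f(S) = exp(S), under the same normalization and mean-energy constraints returns the same MaxEnt distribution, while only the Lagrange multipliers are rescaled. The energy levels E and the target mean energy U below are illustrative choices.

```python
# Hedged illustration: same constraints, entropy vs. an increasing function of
# entropy -> same maximizing distribution. E and U are arbitrary test values.
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative energy levels
U = 1.2                              # illustrative mean-energy constraint

def shannon(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},   # normalization
    {"type": "eq", "fun": lambda p: np.dot(p, E) - U},  # mean energy
]
bounds = [(0.0, 1.0)] * len(E)
p0 = np.full(len(E), 1.0 / len(E))

# Maximize S(p) and exp(S(p)) by minimizing their negatives.
p_S  = minimize(lambda p: -shannon(p),         p0, bounds=bounds, constraints=constraints).x
p_fS = minimize(lambda p: -np.exp(shannon(p)), p0, bounds=bounds, constraints=constraints).x

print(np.allclose(p_S, p_fS, atol=1e-4))  # True: identical MaxEnt distribution
```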

2018 ◽  
Vol 64 (6) ◽  
pp. 603
Author(s):  
Angelo Plastino

We study some peculiarities of the classical variational treatment that applies Jaynes' maximum entropy principle; this variational treatment is usually called MaxEnt. We deal with it in connection with the reciprocity relations of thermodynamics. Two points of view are adopted: (A) the first is purely abstract and is concerned solely with ascertaining that the variational solutions comply with the reciprocity relations, for which no explicit values of the Lagrange multipliers are needed; (B) the second is a straightforward variational process in which the specific values of these multipliers are obtained explicitly. We focus on the so-called q-entropy because it illustrates a situation in which the above two approaches yield different results. We detect an information loss in extracting the explicit form of the normalization-associated Lagrange multipliers.
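
For orientation, the reciprocity relations referred to above can be written in the standard Jaynes form (a hedged sketch in conventional notation, not the paper's own derivation): with constraints on the mean values ⟨A_i⟩ and Lagrange multipliers λ_i,

```latex
% Standard Jaynes-MaxEnt relations (background sketch, conventional notation).
\[
  p \;\propto\; \exp\!\Bigl(-\textstyle\sum_i \lambda_i A_i\Bigr),
  \qquad
  S \;=\; \ln Z + \sum_i \lambda_i \langle A_i\rangle ,
\]
\[
  \frac{\partial S}{\partial \langle A_i\rangle} = \lambda_i ,
  \qquad
  \frac{\partial \ln Z}{\partial \lambda_i} = -\,\langle A_i\rangle .
\]
```

Approach (A) checks that the variational solution satisfies these identities without ever solving for the λ_i, whereas approach (B) computes the λ_i explicitly; for the q-entropy the two routes need not agree.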


2010 ◽  
Vol 20 (02) ◽  
pp. 281-285
Author(s):  
Claudia M. Sarris ◽  
Araceli N. Proto

Within the context of the Maximum Entropy Principle, the specific heat of a system can be expressed in terms of the extensive variables (mean values) or the intensive variables (Lagrange multipliers). It can be shown that the specific heat of the system represented by a given Hamiltonian is not only a thermodynamic quantity but also a dynamical concept. In this contribution, we analyze, for semiquantum nonlinear Hamiltonians, the consequences emerging from the dynamical properties of the specific heat, whose value can be varied through the initial conditions independently of the temperature. An example is outlined.
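
The two expressions mentioned in the first sentence correspond, in the familiar canonical case, to the following standard relations (a hedged sketch in conventional notation; the paper treats the richer semiquantum nonlinear setting):

```latex
% Specific heat via the extensive variable <H> or the intensive variable beta.
\[
  C \;=\; \frac{\partial \langle H\rangle}{\partial T}
    \;=\; k_B \beta^{2}\,\bigl(\langle H^{2}\rangle - \langle H\rangle^{2}\bigr),
  \qquad \beta = \frac{1}{k_B T}.
\]
```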


1990 ◽  
Vol 27 (2) ◽  
pp. 303-313 ◽  
Author(s):  
Claudine Robert

The maximum entropy principle is used to model uncertainty by a maximum entropy distribution, subject to some appropriate linear constraints. We give an entropy concentration theorem (whose proof is based on large-deviation techniques) which provides a mathematical justification of this statistical modelling principle. We then indicate how it can be used in artificial intelligence, and how relevant prior knowledge is provided by some classical descriptive statistical methods. Furthermore, it appears that the maximum entropy principle yields a natural binding between descriptive methods and some statistical structures.
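
A statement of entropy concentration in the form popularized by Jaynes, given here only as orientation (the precise hypotheses and constants of the theorem proved in the paper may differ): for N observations over m outcomes subject to r independent linear constraints, with H_max the constrained entropy maximum,

```latex
\[
  2N\,\bigl(H_{\max} - H(f)\bigr) \;\xrightarrow[\;N\to\infty\;]{d}\; \chi^{2}_{\,m-r-1},
\]
```

so, among all frequency vectors f compatible with the constraints, the overwhelming majority have entropy within O(1/N) of the maximum.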


Author(s):  
Kai Yao ◽  
Jinwu Gao ◽  
Wei Dai

Entropy is a measure of the uncertainty associated with a variable whose value cannot be exactly predicted. In uncertainty theory, it has so far been quantified by logarithmic entropy. However, logarithmic entropy sometimes fails to measure the uncertainty. This paper proposes another type of entropy, named sine entropy, as a supplement, and explores its properties. After that, the maximum entropy principle is introduced, and arc-cosine distributed variables are proved to have the maximum sine entropy for a given expected value and variance.
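
For context, the logarithmic entropy of uncertainty theory for an uncertain variable ξ with uncertainty distribution Φ is usually written as in the first line below; the second line is a hedged reading of the abstract in which "sine entropy" replaces the integrand S(t) by sin(πt), offered as an illustrative assumption rather than the paper's verbatim definition.

```latex
\[
  H[\xi] = \int_{-\infty}^{+\infty} S\bigl(\Phi(x)\bigr)\,\mathrm{d}x,
  \qquad S(t) = -t\ln t - (1-t)\ln(1-t),
\]
\[
  \tilde{H}[\xi] = \int_{-\infty}^{+\infty} \sin\bigl(\pi\,\Phi(x)\bigr)\,\mathrm{d}x .
\]
```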

