A Path-Based Partial Information Decomposition

Entropy ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. 952
Author(s):  
David Sigtermans

Building on the conceptual basis of information theory, we propose a novel mutual information measure, 'path-based mutual information'. This measure results from representing a set of random variables as a probabilistic graphical model. The edges in this graph are modeled as discrete memoryless communication channels; that is, the underlying data are ergodic and stationary, and the Markov condition is assumed to apply. The associated multilinear stochastic maps, tensors, transform source probability mass functions into destination probability mass functions. This allows for an exact expression of the resulting tensor of a cascade of discrete memoryless communication channels in terms of the tensors of the constituent channels in the paths. The resulting path-based information measure gives rise to intuitive, non-negative, and additive path-based information components, namely the redundant, unique, and synergistic information proposed by Williams and Beer. The path-based redundancy satisfies the axioms postulated by Williams and Beer, the identity axiom postulated by Harder, and the left monotonicity axiom postulated by Bertschinger. The ordering relations between the redundancies of different joint collections of sources, as captured in the redundancy lattices of Williams and Beer, follow from the data processing inequality. Although negative information components can arise, we speculate that these result either from unobserved variables or from adding sources that are statistically independent of all other sources to a system containing only non-negative information components. This path-based approach illustrates that information theory provides the concepts and measures for a partial information decomposition.
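The cascade composition described in the abstract can be made concrete. Below is a minimal sketch (our own illustration, not the author's code): a discrete memoryless channel is represented as a column-stochastic matrix, and the channel of a cascade is the ordinary matrix product of the constituent channel matrices. The function names are ours.

```python
import numpy as np

# A discrete memoryless channel as a stochastic matrix T with
# T[j, i] = P(Y = j | X = i), so each column sums to 1.
def apply_channel(T, p_x):
    """Transform a source PMF into the destination PMF: p_y = T @ p_x."""
    return T @ p_x

def cascade(T2, T1):
    """Matrix (tensor) of two channels in series, X -> Z -> Y.

    The Markov condition P(y | z, x) = P(y | z) makes the cascade's
    transition matrix the ordinary matrix product T2 @ T1.
    """
    return T2 @ T1

# Example: two binary symmetric channels with crossover probabilities 0.1 and 0.2.
T1 = np.array([[0.9, 0.1], [0.1, 0.9]])
T2 = np.array([[0.8, 0.2], [0.2, 0.8]])
p_x = np.array([0.5, 0.5])

p_y_direct = apply_channel(cascade(T2, T1), p_x)
p_y_stepwise = apply_channel(T2, apply_channel(T1, p_x))
assert np.allclose(p_y_direct, p_y_stepwise)  # the composition is exact
```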

2021 ◽  
Author(s):  
Chu Pan

Using information measures to infer biological regulatory networks makes it possible to capture nonlinear relationships between variables, but doing so is computationally challenging, and no convenient tool has been available. Here we describe an information theory R package named Informeasure that is dedicated to quantifying nonlinear dependence between variables in biological regulatory networks from an information theory perspective. The package compiles most of the information measures currently available: mutual information, conditional mutual information, interaction information, partial information decomposition, and part mutual information. The first measure is used to infer bivariate networks, while the last four are dedicated to the analysis of trivariate networks. The base installation of this turn-key package allows users to apply these information measures out of the box. Informeasure is implemented in R and is available as an R/Bioconductor package at https://bioconductor.org/packages/Informeasure.
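To make the bivariate measure concrete, here is a minimal plug-in estimator of mutual information from a binned joint distribution. This is an illustrative Python sketch of the underlying quantity, not Informeasure's R API; the function name is ours.

```python
import numpy as np

def mutual_information(joint):
    """Plug-in mutual information I(X;Y) in bits from a joint PMF table.

    joint[i, j] approximates P(X = i, Y = j), e.g. from normalized binned
    counts. This mirrors the bivariate measure the package estimates, but
    is not the package's interface.
    """
    px = joint.sum(axis=1, keepdims=True)   # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)   # marginal P(Y)
    mask = joint > 0                        # 0 * log 0 is taken as 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# Example: perfectly correlated binary variables carry 1 bit.
joint = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(joint))  # -> 1.0
```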


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1331
Author(s):  
Giancarlo Nicola ◽  
Paola Cerchiello ◽  
Tomaso Aste

In this work we investigate whether information theory measures such as mutual information and transfer entropy, extracted from a bank network, Granger-cause financial stress indexes such as the LIBOR-OIS (London Interbank Offered Rate-Overnight Index Swap) spread, the STLFSI (St. Louis Fed Financial Stress Index), and the USD/CHF (US Dollar/Swiss Franc) exchange rate. The information theory measures are extracted from a Gaussian graphical model constructed from daily stock time series of the top 74 listed US banks. The graphical model is calculated with a recently developed algorithm (LoGo) which provides a very fast inference model and allows us to update the graphical model each market day. We can therefore generate daily time series of mutual information and transfer entropy for each bank in the network. The Granger causality between the bank-related measures and the financial stress indexes is investigated with both standard Granger causality and partial Granger causality conditioned on control measures representative of general economic conditions.
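For readers unfamiliar with the test, the following sketch runs a standard bivariate Granger-causality check on synthetic data using statsmodels. It is not the authors' pipeline; the variable names and the data-generating process are stand-ins of our own.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic example: a stand-in for a network-derived measure (e.g. a daily
# transfer entropy series) that drives a stand-in stress index with one lag.
rng = np.random.default_rng(0)
te_series = rng.normal(size=500)
stress = np.zeros(500)
for t in range(1, 500):
    stress[t] = 0.5 * stress[t - 1] + 0.4 * te_series[t - 1] + rng.normal(scale=0.1)

# grangercausalitytests checks whether the second column Granger-causes the
# first; small p-values here indicate te_series Granger-causes stress.
data = np.column_stack([stress, te_series])
results = grangercausalitytests(data, maxlag=2)
```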


Entropy ◽  
2018 ◽  
Vol 20 (11) ◽  
pp. 826
Author(s):  
Conor Finn ◽  
Joseph Lizier

Information is often described as a reduction of uncertainty associated with a restriction of possible choices. Although this interpretation appears in Hartley's foundational work on information theory, it has surprisingly never received a formal treatment in terms of exclusions. This paper addresses the gap by providing an explicit characterisation of information in terms of probability mass exclusions. It then demonstrates that different exclusions can yield the same amount of information and discusses the insight this provides about how information is shared amongst random variables; the lack of progress in this area is a key barrier preventing us from understanding how information is distributed in complex systems. The paper closes by deriving a decomposition of the mutual information which can distinguish between differing exclusions; this provides surprising insight into the nature of directed information.
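A toy example (with our own numbers, not the paper's) illustrates the point that different exclusions can yield the same amount of information: the pointwise information depends only on the ratio p(y|x)/p(y), so distinct exclusion patterns can produce identical values.

```python
import numpy as np

# Pointwise information i(x; y) = log2( p(y|x) / p(y) ): observing x excludes
# probability mass from the prior over Y, raising the mass on the actual y.
def pointwise_info(p_y_given_x, p_y):
    return np.log2(p_y_given_x / p_y)

# Case 1: x doubles the probability of y from 0.25 to 0.5.
print(pointwise_info(p_y_given_x=0.5, p_y=0.25))  # -> 1.0 bit
# Case 2: a different exclusion pattern with the same ratio gives the same value.
print(pointwise_info(p_y_given_x=0.2, p_y=0.1))   # -> 1.0 bit
```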


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 702
Author(s):  
Velimir Ilić ◽  
Ivan Djordjević

Measures of information transfer corresponding to non-additive entropies have been studied intensively in recent decades. The majority of this work concerns measures belonging to the Sharma–Mittal entropy class, such as the Rényi, Tsallis, Landsberg–Vedral, and Gaussian entropies. All of these treatments follow the same approach: mimicking one of the various, mutually equivalent definitions of the Shannon information measures, information transfer is quantified by an appropriately defined measure of mutual information, while the maximal information transfer is treated as a generalized channel capacity. However, all of the previous approaches fail to satisfy at least one of the ineluctable properties which a measure of (maximal) information transfer should satisfy, leading to counterintuitive conclusions and predicting nonphysical behavior even for very simple communication channels. This paper fills the gap by proposing two-parameter measures named the α-q-mutual information and the α-q-capacity. Beyond the standard Shannon approach, special cases of these measures include the α-mutual information and the α-capacity, which are well established in the information theory literature as measures of additive Rényi information transfer, while the Tsallis, Landsberg–Vedral, and Gaussian entropies can also be accessed by special choices of the parameters α and q. It is shown that, unlike the previous definitions, the α-q-mutual information and the α-q-capacity satisfy a set of properties, stated as axioms: they reduce to zero in the case of totally destructive channels and to the (maximal) input Sharma–Mittal entropy in the case of perfect transmission, which is consistent with the maximum likelihood detection error; and, in general, they are non-negative and bounded above by the input and output Sharma–Mittal entropies. Thus, unlike the previous approaches, the proposed (maximal) information transfer measures do not exhibit nonphysical behaviors such as sub-capacitance or super-capacitance, which qualifies them as appropriate measures of Sharma–Mittal information transfer.
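For orientation, the Sharma–Mittal entropy family that unifies these special cases is commonly written as below; the notation and parameter conventions here are ours and may differ from those in the paper.

```latex
% Sharma–Mittal entropy of a discrete PMF p (conventions vary across the literature):
H_{\alpha,q}(X) \;=\; \frac{1}{1-q}\left[\left(\sum_{x} p(x)^{\alpha}\right)^{\frac{1-q}{1-\alpha}} - 1\right]
% Limits: q -> 1 recovers the Rényi entropy of order \alpha,
% q = \alpha recovers the Tsallis entropy, and \alpha, q -> 1 recover Shannon.
```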


2021 ◽  
Vol 15 (1) ◽  
pp. 408-433
Author(s):  
Margaux Dugardin ◽  
Werner Schindler ◽  
Sylvain Guilley

Extra-reductions occurring in Montgomery multiplications disclose side-channel information which can be exploited even in stringent contexts. In this article, we derive stochastic attacks to defeat Rivest-Shamir-Adleman (RSA) with Montgomery-ladder regular exponentiation coupled with base blinding. Namely, we leverage precharacterized multivariate probability mass functions of extra-reductions between pairs of (multiplication, square) in one iteration of the RSA algorithm and the next one(s) to build a maximum likelihood distinguisher. The efficiency of our attack (in terms of required traces) is more than twice that of the state of the art. In addition, we apply our method to the case of regular exponentiation, base blinding, and modulus blinding. Quite surprisingly, modulus blinding does not make our attack impossible, even for large sizes of the modulus randomizing element. At the cost of larger sample sizes, our attacks tolerate noisy measurements. Fortunately, effective countermeasures exist.
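For context, here is a minimal sketch of the Montgomery ladder (our own illustration, not the attack code). It uses plain modular reduction rather than true Montgomery multiplication, so it only shows the regular multiply-and-square structure; the point of the attack is precisely that this regularity does not help, because the leakage comes from data-dependent extra-reductions inside the Montgomery multiplications themselves.

```python
# Montgomery ladder for modular exponentiation: every iteration performs one
# multiplication and one squaring regardless of the key bit, so the operation
# sequence alone leaks nothing about the exponent.
def montgomery_ladder(base, exponent, modulus):
    r0, r1 = 1, base % modulus
    for bit in bin(exponent)[2:]:          # scan bits, most significant first
        if bit == '0':
            r1 = (r0 * r1) % modulus       # multiply
            r0 = (r0 * r0) % modulus       # square
        else:
            r0 = (r0 * r1) % modulus       # multiply
            r1 = (r1 * r1) % modulus       # square
    return r0

assert montgomery_ladder(7, 560, 561) == pow(7, 560, 561)
```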


Author(s):  
Eahsan Shahriary ◽  
Amir Hajibabaee

This book offers students and researchers a unique introduction to Bayesian statistics. The authors provide a wonderful journey into the realm of Bayesian probability and inspire readers to become Bayesian statisticians. The book starts with an introduction to probability and covers Bayes' Theorem, Probability Mass Functions, Probability Density Functions, The Beta-Binomial Conjugate, Markov chain Monte Carlo (MCMC), and the Metropolis-Hastings Algorithm. The book is very well written and its topics are to the point, with real-world applications, but it does not provide examples of computation using common open-source software.
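As an example of the kind of computation the review finds missing, here is a minimal Metropolis-Hastings sketch (our own illustration, not from the book) that samples the posterior of a coin's bias in the Beta-Binomial setting.

```python
import math
import random

# Posterior of a coin's bias theta after k heads in n flips, with a uniform
# Beta(1, 1) prior; the log posterior is known up to an additive constant.
def log_posterior(theta, k=6, n=9):
    if not 0.0 < theta < 1.0:
        return -math.inf
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

def metropolis_hastings(steps=10_000, step_size=0.1):
    theta, samples = 0.5, []
    for _ in range(steps):
        proposal = theta + random.gauss(0.0, step_size)   # symmetric proposal
        # Accept with probability min(1, posterior ratio).
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis_hastings()
# Discard burn-in; the exact posterior is Beta(7, 4) with mean 7/11 ~ 0.636.
print(sum(samples[1000:]) / len(samples[1000:]))
```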


This chapter presents a higher-order-logic formalization of the main concepts of information theory (Cover & Thomas, 1991), such as the Shannon entropy and mutual information, using formalizations of the foundational theories of measure, Lebesgue integration, and probability. The main results of the chapter include formalizations of the Radon-Nikodym derivative and the Kullback-Leibler (KL) divergence (Coble, 2010). The latter provides a unified framework from which most of the commonly used information measures can be defined. The chapter provides general definitions that are valid for both discrete and continuous cases, and then proves the corresponding reduced expressions for the case where the measures considered are absolutely continuous over finite spaces.
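In the finite, absolutely continuous case the reduced expressions are simple sums. The following sketch (ours, independent of the chapter's higher-order-logic development) computes the discrete KL divergence and uses it to recover mutual information as a special case.

```python
import numpy as np

# Discrete KL divergence D_KL(p || q) in bits. Absolute continuity here means
# q must be positive wherever p is, so the ratio inside the log is defined.
def kl_divergence(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Mutual information as a KL divergence: I(X;Y) = D_KL( p(x,y) || p(x)p(y) ).
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
product = np.outer(joint.sum(axis=1), joint.sum(axis=0))
print(kl_divergence(joint.ravel(), product.ravel()))  # -> I(X;Y) ~ 0.278 bits
```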


2020 ◽  
Vol 43 (1) ◽  
pp. 21-48
Author(s):  
Josmar Mazucheli ◽  
Wesley Bertoli ◽  
Ricardo Puziol Oliveira

Methods for obtaining discrete analogues of continuous distributions have been widely considered in recent years. In general, the discretization process provides probability mass functions that can be competitive with the traditional model used in the analysis of count data, the Poisson distribution. The discretization procedure also avoids using a continuous distribution in the analysis of strictly discrete data. In this paper, we introduce two discrete analogues of the Shanker distribution, one based on the method of infinite series and one based on the survival function, as alternatives for modeling overdispersed datasets. Despite the difference between the discretization methods, the resulting distributions are interchangeable. However, the distribution generated by the infinite-series method has simpler mathematical expressions for the shape, the generating functions, and the central moments. Maximum likelihood theory is considered for estimation and asymptotic inference. A simulation study is carried out in order to evaluate some frequentist properties of the developed methodology. The usefulness of the proposed models is evaluated using real datasets from the literature.
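The survival-function method can be sketched generically: given a continuous survival function S, define P(X = k) = S(k) - S(k + 1) for k = 0, 1, 2, and so on. The snippet below applies this to the Shanker case, assuming the standard form of its survival function; the function names are ours, and this is an illustrative sketch rather than the paper's code.

```python
import math

# Survival function of the continuous Shanker distribution with rate theta
# (assumed standard form: S(x) = (theta^2 + 1 + theta*x) / (theta^2 + 1) * exp(-theta*x)).
def shanker_survival(x, theta):
    return (theta**2 + 1 + theta * x) / (theta**2 + 1) * math.exp(-theta * x)

# Survival-function discretization: P(X = k) = S(k) - S(k + 1).
def discrete_pmf(k, theta):
    return shanker_survival(k, theta) - shanker_survival(k + 1, theta)

theta = 0.8
pmf = [discrete_pmf(k, theta) for k in range(50)]
print(sum(pmf))  # the sum telescopes to 1 - S(50) ~ 1: a valid PMF
```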


1978 ◽  
Vol 10 (04) ◽  
pp. 788-802
Author(s):  
Bruce Ebanks

It is shown that every measure of expected information which has the branching property is of the form [formula omitted], where J is a given information measure which is compositive under a regular binary operation and the Ψ_n are antisymmetric, bi-additive functions. In a probability space, such measures (entropies) take the form [formula omitted].

