Inferring network properties from time series using transfer entropy and mutual information: Validation of multivariate versus bivariate approaches

2020 ◽  
pp. 1-32
Author(s):  
Leonardo Novelli ◽  
Joseph T. Lizier

Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all network structures for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened degree distribution tails. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
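For the Gaussian estimators used in the autoregressive simulations, bivariate transfer entropy reduces to half the log-ratio of residual variances from two nested autoregressions (the Granger form). A minimal numpy sketch of that estimator follows; the coupled pair of series and their coefficients are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_te(source, target, k=1):
    """Bivariate transfer entropy source -> target under a linear-Gaussian
    model: 0.5 * log(var(reduced residuals) / var(full residuals))."""
    n = len(target)
    Y = target[k:]
    # Lagged history matrices: target-only (reduced) vs. target + source (full)
    hist_t = np.column_stack([target[k - i - 1 : n - i - 1] for i in range(k)])
    hist_s = np.column_stack([source[k - i - 1 : n - i - 1] for i in range(k)])
    ones = np.ones((n - k, 1))
    X_red = np.hstack([ones, hist_t])
    X_full = np.hstack([ones, hist_t, hist_s])
    res_red = Y - X_red @ np.linalg.lstsq(X_red, Y, rcond=None)[0]
    res_full = Y - X_full @ np.linalg.lstsq(X_full, Y, rcond=None)[0]
    return 0.5 * np.log(res_red.var() / res_full.var())

# Unidirectionally coupled pair: x drives y with one step of lag
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.zeros(5000)
for t in range(1, 5000):
    y[t] = 0.6 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()
print(gaussian_te(x, y), gaussian_te(y, x))  # forward TE >> reverse TE
```

For Gaussian variables this quantity coincides with half the Granger-causality log-ratio, which is why the abstract can compare transfer entropy directly to functional connectivity and Granger causality.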

Circulation ◽  
2015 ◽  
Vol 132 (suppl_3) ◽  
Author(s):  
Hiroshi Ashikaga ◽  
Jonathan Chrispin ◽  
Degang Wu ◽  
Joshua Garland

Recent evidence suggests that pulmonary vein isolation (PVI) may perturb the electrophysiological substrate for maintenance of atrial fibrillation (AF). Our previous work indicates that information theory metrics can quantify electrical communications during arrhythmia. We hypothesized that PVI ‘rewires’ the electrical communication network during AF such that the topology exhibits higher levels of small-world network properties, with a higher clustering coefficient and lower path length, than would be expected by chance. Thirteen consecutive patients (n=6 with prior PVI and n=7 without) underwent AF ablation using a 64-electrode basket catheter in the left atrium. Multielectrode recording was performed during AF for 60 seconds, followed by PVI. Mutual information was calculated from the time series between each pair of electrodes using the Kraskov-Stögbauer-Grassberger estimator. The all-to-all mutual information matrix (64×64; Figure, upper panels) was thresholded by the median and standard deviations of mutual information to build a binary adjacency matrix for electrical communication networks. Small-world network properties (swn; ‘small-world-ness’) were quantified as the ratio of the observed average clustering coefficient to that of a random network, divided by the ratio of the observed average path length to that of a random network. swn was expressed in Z-score units. As the binarizing threshold increased, the Z-score of swn decreased (Figure, lower panel). However, the Z-score at each threshold was consistently higher in patients with prior PVI than in those without (p<0.05). In conclusion, the electrical communication network during AF exhibits higher levels of small-world network properties in patients with prior PVI than in those without. This finding supports the concept that PVI perturbs the underlying substrate. In addition, the swn of the electrical communication network may be a promising metric to quantify substrate modification.
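The small-world-ness ratio described above, (C/C_rand)/(L/L_rand), can be sketched with networkx before any Z-score normalization. The graph below stands in for a thresholded 64-electrode network; the Watts-Strogatz parameters and random-ensemble size are illustrative, not the study's data:

```python
import networkx as nx

def small_world_ness(G, n_rand=20, seed=0):
    """sigma = (C / C_rand) / (L / L_rand): the ratio of observed clustering
    and path length to their averages over size-matched random graphs."""
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    n, m = G.number_of_nodes(), G.number_of_edges()
    C_r, L_r = [], []
    for i in range(n_rand):
        R = nx.gnm_random_graph(n, m, seed=seed + i)
        if not nx.is_connected(R):
            continue  # skip disconnected realisations for simplicity
        C_r.append(nx.average_clustering(R))
        L_r.append(nx.average_shortest_path_length(R))
    C_rand = sum(C_r) / len(C_r)
    L_rand = sum(L_r) / len(L_r)
    return (C / C_rand) / (L / L_rand)

# 64 nodes, like the basket catheter; k and p are illustrative
G = nx.connected_watts_strogatz_graph(64, 6, 0.1, seed=1)
print(small_world_ness(G))  # values above 1 indicate small-world structure
```

The study additionally expresses this quantity in Z-score units against the random ensemble; that normalization is omitted here for brevity.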


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1331
Author(s):  
Giancarlo Nicola ◽  
Paola Cerchiello ◽  
Tomaso Aste

In this work we investigate whether information theory measures such as mutual information and transfer entropy, extracted from a bank network, Granger-cause financial stress indexes such as the LIBOR-OIS (London Interbank Offered Rate-Overnight Index Swap) spread, the STLFSI (St. Louis Fed Financial Stress Index) and the USD/CHF (US Dollar/Swiss Franc) exchange rate. The information theory measures are extracted from a Gaussian Graphical Model constructed from daily stock time series of the top 74 listed US banks. The graphical model is calculated with a recently developed algorithm (LoGo) which provides a very fast inference procedure, allowing us to update the graphical model each market day. We can therefore generate daily time series of mutual information and transfer entropy for each bank in the network. The Granger causality between the bank-related measures and the financial stress indexes is investigated with both standard Granger causality and partial Granger causality conditioned on control measures representative of general economic conditions.
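Under a Gaussian model such as the one fitted by LoGo, the mutual information between two nodes has a closed form in their correlation, I = -(1/2) ln(1 - ρ²). A small sketch with synthetic "bank" return series; the names and numbers are illustrative, not the paper's data:

```python
import numpy as np

def gaussian_mi(rho):
    """Mutual information (in nats) between two jointly Gaussian variables
    with correlation rho: I = -0.5 * ln(1 - rho^2)."""
    return -0.5 * np.log(1.0 - rho ** 2)

# Two hypothetical bank return series sharing a common market factor
rng = np.random.default_rng(42)
n = 2000
common = rng.standard_normal(n)
bank_a = common + rng.standard_normal(n)   # true correlation 0.5
bank_b = common + rng.standard_normal(n)
rho = np.corrcoef(bank_a, bank_b)[0, 1]
print(gaussian_mi(rho))  # close to -0.5*ln(1 - 0.25) ≈ 0.144 nats
```

In the paper's setting the correlations come from the daily-updated graphical model rather than a raw sample correlation, but the MI formula is the same.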


2008 ◽  
Vol 33 (4) ◽  
pp. 27-46 ◽  
Author(s):  
Y V Reddy ◽  
A Sebastin

Interactions between the foreign exchange market and the stock market of a country are considered an important internal force of the markets in a financially liberalized environment. If no causal relationship from one market to the other is detected, the latter market is informationally efficient; if causality does exist, hedging exposure to one market by taking a position in the other will be effective. The temporal relationship between the forex market and the stock market of developing and developed countries has been studied, especially after the East Asian financial crisis of 1997–98, using various methods such as cross-correlation, cross-spectrum, and error correction models, but these methods identify only linear relations. A statistically rigorous approach to the detection of interdependence, including non-linear dynamic relationships, between time series is provided by tools based on the information theoretic concept of entropy. Entropy is the amount of disorder in a system, and equivalently the amount of information needed to predict the next measurement with a certain precision. The mutual information between two random variables X and Y with joint probability mass function p(x,y) and marginal mass functions p(x) and p(y) is defined as the relative entropy between the joint distribution p(x,y) and the product distribution p(x)*p(y). Mutual information is the reduction in the uncertainty of X due to knowledge of Y, and vice versa. Since mutual information measures the deviation of the variables from independence, it has been proposed as a tool to measure the relationship between financial market segments. However, mutual information is a symmetric measure and contains neither dynamic information nor a directional sense.
Even time-delayed mutual information does not distinguish information actually exchanged from information shared due to a common input signal or history, and therefore does not quantify the actual overlap of the information content of two variables. Another information theoretic measure, transfer entropy, was introduced by Thomas Schreiber (2000) to study the relationship between dynamic systems; the concept has also been applied by some authors to study the causal structure between financial time series. In this paper, an attempt has been made to study the interaction between the stock and forex markets in India by computing transfer entropy between daily data series of the 50-stock index of the National Stock Exchange of India Limited (Nifty) and the exchange rate of the Indian Rupee vis-à-vis the US Dollar (the Reserve Bank of India reference rate). The entire study period, November 1995 to March 2007, has been divided into three sub-periods for the purpose of analysis, considering the developments that took place during these sub-periods. The results reveal that: (i) only low-level interactions exist between the stock and forex markets of India at a time scale of a day or less, although theory suggests an interactive relationship between the two markets; and (ii) the flow from the stock market to the forex market is more pronounced than the flow in the reverse direction.
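The definition above, mutual information as the relative entropy between the joint distribution and the product of the marginals, can be sketched as a plug-in estimate on discretized series. The binarized "daily move" series here are synthetic, not the Nifty/INR data:

```python
import numpy as np

def discrete_mi(x, y):
    """I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), i.e. the
    KL divergence between the joint and the product of the marginals."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    for i, j in zip(x_idx, y_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0  # restrict to observed cells; 0 * log 0 contributes nothing
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# Binarised daily moves (up=1 / down=0) of two illustrative series
rng = np.random.default_rng(7)
a = rng.integers(0, 2, 10000)
b = np.where(rng.random(10000) < 0.9, a, 1 - a)  # b copies a 90% of the time
print(discrete_mi(a, b))  # positive; ln(2) ≈ 0.693 nats would be perfect dependence
```

As the abstract notes, this measure is symmetric: discrete_mi(a, b) equals discrete_mi(b, a), which is exactly why transfer entropy is needed for directional questions.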


2019 ◽  
Author(s):  
Dhurata Nebiu ◽  
Hiqmet Kamberaj

Symbolic Information Flow Measurement software is used to compute the information flow between different components of a dynamical system, or between different dynamical systems, using symbolic transfer entropy. Here, each time series represents the time evolution trajectory of a component of the dynamical system. Different methods are used to perform a symbolic analysis of the time series based on a coarse-graining approach, by computing the so-called embedding parameters. Information flow is measured in terms of the average symbolic transfer entropy and the local symbolic transfer entropy. In addition, a new measure of mutual information based on the symbolic analysis, called symbolic mutual information, is introduced.


2015 ◽  
Vol 27 (2) ◽  
pp. 329-364 ◽  
Author(s):  
Aurélie Garnier ◽  
Alexandre Vidal ◽  
Clément Huneau ◽  
Habib Benali

Neural mass modeling is a part of computational neuroscience that was developed to study the general behavior of a neuronal population. This type of mesoscopic model is able to generate output signals comparable to experimental data, such as electroencephalograms. Classically, neural mass models consider two interconnected populations: excitatory pyramidal cells and inhibitory interneurons. However, many authors have included an excitatory feedback on the pyramidal cell population. Two distinct approaches have been developed: a direct feedback on the main pyramidal cell population and an indirect feedback via a secondary pyramidal cell population. In this letter, we propose a new neural mass model that couples these two approaches. We perform a detailed bifurcation analysis and present a glossary of dynamical behaviors and associated time series. Our study reveals that the model is able to generate particular realistic time series that had not previously been pointed out in either simulated or experimental data. Finally, we evaluate the effect of the balance between the two excitatory feedbacks on the dynamical behavior of the model. For this purpose, we compute the codimension-2 bifurcation diagrams of the system to establish a map of the repartition of dynamical behaviors in a direct-versus-indirect feedback parameter space. A perspective of this work is to estimate, from a given time series, the range of parameter values, especially in terms of direct versus indirect excitatory feedback.
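The classical two-population core that such models extend can be sketched with the standard Jansen-Rit equations; the extra direct/indirect excitatory feedback terms of the proposed model are omitted here, and the constant input p is an illustrative choice in the oscillatory regime:

```python
import numpy as np

# Standard Jansen-Rit parameter set (pyramidal + inhibitory populations)
A, B, a, b = 3.25, 22.0, 100.0, 50.0
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56

def S(v):
    """Sigmoid converting mean membrane potential to mean firing rate."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

dt, T, p = 1e-4, 2.0, 200.0   # constant external input p (illustrative)
y = np.zeros(6)               # y0..y2 and their derivatives y3..y5
eeg = []
for _ in range(int(T / dt)):  # forward-Euler integration
    y0, y1, y2, y3, y4, y5 = y
    dy = np.array([
        y3, y4, y5,
        A * a * S(y1 - y2) - 2 * a * y3 - a * a * y0,
        A * a * (p + C2 * S(C1 * y0)) - 2 * a * y4 - a * a * y1,
        B * b * C4 * S(C3 * y0) - 2 * b * y5 - b * b * y2,
    ])
    y = y + dt * dy
    eeg.append(y[1] - y[2])   # EEG-like output: excitatory minus inhibitory PSP
eeg = np.array(eeg)
print(eeg.min(), eeg.max())
```

The direct-feedback variants discussed in the letter add a further term driving the pyramidal equation; mapping behavior as that term is traded against the indirect loop is the subject of the codimension-2 analysis.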


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 208
Author(s):  
Christos Koutlis ◽  
Dimitris Kugiumtzis

Many methods of Granger causality, or broadly termed connectivity, have been developed to assess the causal relationships between system variables based only on the information extracted from the time series. The power of these methods to capture the true underlying connectivity structure has been assessed using simulated dynamical systems where the ground truth is known. Here, we consider the presence of an unobserved variable that acts as a hidden source for the observed high-dimensional dynamical system and study the effect of the hidden source on the estimation of the connectivity structure. In particular, the focus is on estimating the direct causality effects in high-dimensional time series (not including the hidden source) of relatively short length. We examine the performance of a linear and a nonlinear connectivity measure using dimension reduction and compare them to a linear measure designed for latent variables. For the simulations, four systems are considered: the coupled Hénon maps system, the coupled Mackey–Glass system, the neural mass model and the vector autoregressive (VAR) process, each comprising 25 subsystems (variables for VAR) in a closed chain coupling structure, with another subsystem (variable for VAR) driving all the others and acting as the hidden source. The results show that the direct causality measures correctly estimate, in general terms, the existing connectivity when the driving from the hidden source is zero or weak, yet fail to detect the actual relationships when the driving is strong, with the nonlinear measure based on dimension reduction performing best. An example from finance, including and excluding the USA index in the global market indices, highlights the different performance of the connectivity measures in the presence of a hidden source.
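The effect of a hidden common driver can be sketched with a toy system: two observed processes with no direct coupling, both driven (at different lags) by an unobserved source. A bivariate linear-Gaussian measure then reports a spurious link when the driving is strong. All coefficients below are illustrative, not the paper's systems:

```python
import numpy as np

def linear_te(source, target):
    """Bivariate linear-Gaussian transfer entropy, one lag (Granger log-ratio)."""
    Y = target[1:]
    ones = np.ones(len(Y))
    X_red = np.column_stack([ones, target[:-1]])
    X_full = np.column_stack([ones, target[:-1], source[:-1]])
    r_red = Y - X_red @ np.linalg.lstsq(X_red, Y, rcond=None)[0]
    r_full = Y - X_full @ np.linalg.lstsq(X_full, Y, rcond=None)[0]
    return 0.5 * np.log(r_red.var() / r_full.var())

def simulate(c, n=5000, seed=11):
    """x and y are NOT coupled; both are driven by hidden z with strength c,
    x at lag 1 and y at lag 2, so x 'anticipates' y's shared input."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n); x = np.zeros(n); y = np.zeros(n)
    for t in range(2, n):
        z[t] = 0.7 * z[t - 1] + rng.standard_normal()
        x[t] = 0.5 * x[t - 1] + c * z[t - 1] + rng.standard_normal()
        y[t] = 0.5 * y[t - 1] + c * z[t - 2] + rng.standard_normal()
    return x, y

x0, y0 = simulate(c=0.0)   # no hidden driving
x1, y1 = simulate(c=1.0)   # strong hidden driving
print(linear_te(x0, y0), linear_te(x1, y1))  # spurious x->y link appears with c=1
```

This is the failure mode the paper quantifies at scale: conditioning could remove the spurious link, but not when the common driver is unobserved.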


2016 ◽  
Vol 4 (4) ◽  
pp. 407-432 ◽  
Author(s):  
RICCARDO RASTELLI ◽  
NIAL FRIEL ◽  
ADRIAN E. RAFTERY

We derive properties of latent variable models for networks, a broad class of models that includes the widely used latent position models. We characterize several features of interest, with particular focus on the degree distribution, clustering coefficient, average path length, and degree correlations. We introduce the Gaussian latent position model and derive analytic expressions and asymptotic approximations for its network properties. We pay particular attention to one special case, the Gaussian latent position model with random effects, and show that it can represent the heavy-tailed degree distributions, positive asymptotic clustering coefficients, and small-world behaviors that often occur in observed social networks. Finally, we illustrate the ability of the models to capture important features of real networks through several well-known datasets.
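A latent position model of the kind analysed here can be sketched by drawing Gaussian latent positions and linking nodes with a probability that decays with latent distance; the logistic kernel and its constants below are illustrative, not the paper's exact specification. Latent proximity makes triangles likely, so clustering ends up well above the edge density:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n, d = 300, 2
z = rng.standard_normal((n, d))   # latent positions z_i ~ N(0, I_d)

G = nx.Graph()
G.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        dist = np.linalg.norm(z[i] - z[j])
        # Link probability decays with latent distance (logistic kernel;
        # steepness 5 and offset 0.8 are illustrative choices)
        if rng.random() < 1.0 / (1.0 + np.exp(5.0 * (dist - 0.8))):
            G.add_edge(i, j)

print(nx.density(G), nx.average_clustering(G))
# clustering well above density: latent proximity induces transitivity
```

The random-effects variant adds a per-node parameter to the link function, which is what produces heavy-tailed degrees on top of this clustering behavior.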


Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3429 ◽  
Author(s):  
Chu ◽  
Yuan ◽  
Hu ◽  
Pan ◽  
Pan

With the increasing size and flexibility of modern grid-connected wind turbines, advanced control algorithms are urgently needed, especially for multi-degree-of-freedom control of blade pitches and sizable rotors. However, the complex dynamics of wind turbines are difficult to model in a simplified state-space form for advanced control design considering stability. In this paper, grey-box parameter identification of critical mechanical models is systematically studied without an excitation experiment, and the applicability of different methods is compared from the perspective of control design. Firstly, through mechanism analysis, the Hammerstein structure is adopted for mechanical-side modeling of wind turbines. Under closed-loop control across the whole wind speed range, the structural identifiability of the drive-train model is analyzed qualitatively. Mutual information among the identified variables is then calculated to quantitatively reveal the relationship between identification accuracy and variable relevance. Next, methods such as subspace identification, recursive least squares identification, and optimal identification are compared for a two-mass model and a tower model. Finally, through a high-fidelity simulation of a 2 MW wind turbine in the GH Bladed software, multivariable datasets are produced for study. The results show that the Hammerstein structure is effective for simplifying the modeling process, and that closed-loop identification of a two-mass model without an excitation experiment is feasible. Meanwhile, it is found that variable relevance has an obvious influence on identification accuracy, for which mutual information is a good indicator: higher mutual information often yields better accuracy. Additionally, the three identification methods have diverse performance levels, showing their application potential for different control design algorithms.
In contrast, grey-box optimal parameter identification is the most promising for advanced control design considering stability, although its simplified representation of complex mechanical dynamics needs additional dynamic compensation, which will be studied in future work.
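Of the compared methods, recursive least squares is the simplest to sketch. The first-order surrogate below stands in for the drive-train model; the structure, coefficients, and forgetting factor are illustrative, not the Bladed datasets:

```python
import numpy as np

def rls(phi_rows, y_vals, lam=0.99):
    """Recursive least squares with forgetting factor lam:
    theta <- theta + K * (y - phi . theta), K = P phi / (lam + phi' P phi)."""
    p = phi_rows.shape[1]
    theta = np.zeros(p)
    P = np.eye(p) * 1e3          # large initial covariance: uninformative prior
    for phi, y in zip(phi_rows, y_vals):
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

# Illustrative first-order surrogate: y[t] = a*y[t-1] + b*u[t-1] + noise
rng = np.random.default_rng(5)
a_true, b_true = 0.9, 0.5
N = 2000
u = rng.standard_normal(N)       # input (e.g. generator torque demand)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.01 * rng.standard_normal()
Phi = np.column_stack([y[:-1], u[:-1]])
print(rls(Phi, y[1:]))           # ≈ [0.9, 0.5]
```

In a Hammerstein setting the static nonlinearity is applied to u before the linear regression, which is what keeps the parameter estimation linear-in-parameters.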


2021 ◽  
Author(s):  
Áine Byrne ◽  
James Ross ◽  
Rachel Nicks ◽  
Stephen Coombes

Neural mass models have been used since the 1970s to model the coarse-grained activity of large populations of neurons. They have proven especially fruitful for understanding brain rhythms. However, although motivated by neurobiological considerations, they are phenomenological in nature and cannot hope to recreate some of the rich repertoire of responses seen in real neuronal tissue. Here we consider a simple spiking neuron network model that has recently been shown to admit an exact mean-field description for both synaptic and gap-junction interactions. The mean-field model takes a similar form to a standard neural mass model, with an additional dynamical equation describing the evolution of within-population synchrony. As well as reviewing the origins of this next-generation mass model, we discuss its extension to describe an idealised spatially extended planar cortex. To emphasise the usefulness of this model for EEG/MEG modelling, we show how it can be used to uncover the role of local gap-junction coupling in shaping large-scale synaptic waves.
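A minimal sketch of such a next-generation mean field, assuming the exact QIF mean-field equations of Montbrió, Pazó and Roxin (the synaptic and gap-junction terms of the full model are omitted, and the parameter values are illustrative). The within-population synchrony is recovered from the rate r and mean voltage v via the Kuramoto order parameter:

```python
import numpy as np

tau, delta, eta, J = 1.0, 1.0, -5.0, 15.0   # illustrative parameters
dt, T = 1e-3, 40.0
r, v = 0.1, -2.0                            # firing rate and mean voltage
for _ in range(int(T / dt)):                # forward-Euler integration
    dr = delta / (np.pi * tau ** 2) + 2.0 * r * v / tau
    dv = (v ** 2 + eta + J * tau * r - (np.pi * tau * r) ** 2) / tau
    r, v = r + dt * dr, v + dt * dv
# Within-population synchrony: Kuramoto order parameter from (r, v)
w = np.pi * tau * r + 1j * v
z = (1.0 - w.conjugate()) / (1.0 + w.conjugate())
print(r, abs(z))  # steady-state rate; |z| < 1 for any physical state
```

It is this extra synchrony variable, absent from classical neural mass models, that lets the framework capture event-related synchronization and desynchronization in EEG/MEG.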

