Complexity as Causal Information Integration

Entropy ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. 1107
Author(s):  
Carlotta Langer ◽  
Nihat Ay

Complexity measures in the context of the Integrated Information Theory of consciousness try to quantify the strength of the causal connections between different neurons. This is done by minimizing the KL-divergence between a full system and one without causal cross-connections. Various measures have been proposed and compared in this setting. We will discuss a class of information geometric measures that aim at assessing the intrinsic causal cross-influences in a system. One promising candidate of these measures, denoted by ΦCIS, is based on conditional independence statements and does satisfy all of the properties that have been postulated as desirable. Unfortunately, it does not have a graphical representation, which makes it less intuitive and difficult to analyze. We propose an alternative approach using a latent variable, which models a common exterior influence. This leads to a measure ΦCII, Causal Information Integration, that satisfies all of the required conditions. Our measure can be calculated using an iterative information geometric algorithm, the em-algorithm. This allows us to compare its behavior to existing integrated information measures.
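As a toy illustration of the kind of quantity such measures minimize (not the authors' ΦCII or the em-algorithm itself), the simplest member of this family is the KL-divergence between a joint distribution and its fully "split" approximation with all cross-connections removed, i.e. the product of its marginals. All names below are illustrative:

```python
import numpy as np

def kl_divergence(p, q):
    """KL divergence D(p || q) between discrete distributions, in nats."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def split_system(p_xy):
    """Approximate a joint distribution by the product of its marginals,
    i.e. a 'split' model with all cross-connections removed."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    return px * py

# Toy joint distribution over two binary variables with a dependency.
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
phi = kl_divergence(p, split_system(p))  # integration-like quantity
```

For an independent joint distribution this quantity is zero; the stronger the cross-dependence, the larger it becomes. The measures in the paper refine this idea by restricting which connections are cut and by introducing a latent exterior influence.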

2018 ◽  
Author(s):  
So Nakashima ◽  
Yuki Sughiyama ◽  
Tetsuya J. Kobayashi

Phenotypic variability in a population of cells can work as bet-hedging under an unpredictably changing environment, a typical example of which is bacterial persistence. To understand the strategy that controls such phenomena, it is indispensable to identify the phenotype of each cell and its inheritance. Although recent advancements in microfluidic technology offer us useful lineage data, they are insufficient to directly identify the phenotypes of the cells. An alternative approach is to infer the phenotype from the lineage data by latent-variable estimation. To this end, however, we must resolve a bias in inference from lineages known as survivorship bias. In this work, we clarify how survivorship bias distorts statistical estimations. We then propose a latent-variable estimation algorithm for lineage trees that is free of survivorship bias, based on an expectation-maximization (EM) algorithm, which we call the Lineage EM algorithm (LEM). LEM provides a statistical method to identify the traits of cells that is applicable to various kinds of lineage data.
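The core of any such estimator is the EM iteration itself. Below is a minimal generic sketch (a plain two-component 1D Gaussian mixture, not the tree-structured, bias-corrected LEM of the paper; all names and the synthetic data are illustrative):

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Minimal EM for a two-component 1D Gaussian mixture.
    The component label plays the role of the latent phenotype."""
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each cell
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sigma, pi

# Synthetic data: measurements from cells of two latent phenotypes.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(5.0, 0.5, 200)])
mu, sigma, pi = em_gaussian_mixture(x)
```

The LEM contribution is to replace the naive E-step with one that reweights lineages to undo survivorship bias, while keeping this overall alternating structure.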


2020 ◽  
Vol 12 ◽  
pp. 184797901989931 ◽  
Author(s):  
Soontorn Lancharoen ◽  
Poonpong Suksawang ◽  
Thanakorn Naenna

This study contributes to the promotion of healthcare information integration and to the readiness assessment of factors affected by quality improvement in hospital performance. This is beneficial for the healthcare industry because errors in integrated information can significantly affect the safety of patients and their confidence in the healthcare system. The proposed research method identifies and confirms capability factors after a readiness assessment with empirical testing; the data were collected from hospitals in Thailand. An analytic network process was used as the tool for calculating and testing the readiness assessment of the integrated-information results. The results show how improving information-integration factors affects the performance of hospitals in the healthcare industry. Three capability factors were found to have a significant impact on information integration and hospital performance. The model analysis suggests that the identified capability factors (organizational, group, and individual) should be improved with regard to information integration, which is used to evaluate performance in the healthcare industry, and this readiness assessment may be useful in other relevant industries.


2020 ◽  
Vol 30 (8) ◽  
pp. 4563-4580 ◽  
Author(s):  
Andrés Canales-Johnson ◽  
Alexander J Billig ◽  
Francisco Olivares ◽  
Andrés Gonzalez ◽  
María del Carmen Garcia ◽  
...  

At any given moment, we experience a perceptual scene as a single whole and yet we may distinguish a variety of objects within it. This phenomenon instantiates two properties of conscious perception: integration and differentiation. Integration is the property of experiencing a collection of objects as a unitary percept and differentiation is the property of experiencing these objects as distinct from each other. Here, we evaluated the neural information dynamics underlying integration and differentiation of perceptual contents during bistable perception. Participants listened to a sequence of tones (auditory bistable stimuli) experienced either as a single stream (perceptual integration) or as two parallel streams (perceptual differentiation) of sounds. We computed neurophysiological indices of information integration and information differentiation with electroencephalographic and intracranial recordings. When perceptual alternations were endogenously driven, the integrated percept was associated with an increase in neural information integration and a decrease in neural differentiation across frontoparietal regions, whereas the opposite pattern was observed for the differentiated percept. However, when perception was exogenously driven by a change in the sound stream (no bistability), neural oscillatory power distinguished between percepts but information measures did not. We demonstrate that perceptual integration and differentiation can be mapped to theoretically motivated neural information signatures, suggesting a direct relationship between phenomenology and neurophysiology.


2002 ◽  
Vol 27 (3) ◽  
pp. 291-317 ◽  
Author(s):  
Natasha Rossi ◽  
Xiaohui Wang ◽  
James O. Ramsay

The methods of functional data analysis are used to estimate item response functions (IRFs) nonparametrically. The EM algorithm is used to maximize the penalized marginal likelihood of the data. The penalty controls the smoothness of the estimated IRFs, and is chosen so that, as the penalty is increased, the estimates converge to shapes closely represented by the three-parameter logistic family. The one-dimensional latent trait model is recast as a problem of estimating a space curve or manifold, and, expressed in this way, the model no longer involves any latent constructs, and is invariant with respect to choice of latent variable. Some results from differential geometry are used to develop a data-anchored measure of ability and a new technique for assessing item discriminability. Functional data-analytic techniques are used to explore the functional variation in the estimated IRFs. Applications involving simulated and actual data are included.


Author(s):  
David Darmon ◽  
Tomas Watanabe ◽  
Christopher Cellucci ◽  
Paul E Rapp

Multichannel EEGs were obtained from healthy participants in the eyes-open and eyes-closed no-task conditions (the alpha component is typically abolished with the eyes open). EEG dynamics in the two conditions were quantified with two related binary Lempel-Ziv measures of the first principal component and with three measures of integrated information, including the more recently proposed integrated synergy. Both integrated information and integrated synergy with model order p=1 had greater values in the eyes-closed condition. If the model order of integrated synergy was determined with the Bayesian Information Criterion, this pattern was reversed, and, in common with the other measures, integrated synergy was greater in the eyes-open condition. Eyes-open versus eyes-closed separation was quantified by calculating the between-condition effect size. Lempel-Ziv complexity of the first principal component showed greater separation than the measures of integrated information. The performance of the integrated information measures investigated here when distinguishing between indisputably different physiological states encourages caution when advocating for their use as measures of consciousness.
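For reference, binary Lempel-Ziv complexity counts the distinct phrases needed to build a binarized signal. A minimal LZ78-style sketch is below (published EEG work often uses the closely related LZ76 parse; this version and the median binarization are illustrative, not the authors' exact pipeline):

```python
def lempel_ziv_complexity(s):
    """LZ78-style parse: count the distinct phrases needed to build s."""
    phrases, c, w = set(), 0, ""
    for ch in s:
        w += ch
        if w not in phrases:  # a new phrase ends here
            phrases.add(w)
            c += 1
            w = ""
    return c

def binarize(signal):
    """Binarize a signal around its median before computing complexity."""
    m = sorted(signal)[len(signal) // 2]
    return "".join("1" if v > m else "0" for v in signal)
```

Regular signals parse into few phrases (ten zeros yield only 4), while irregular signals yield many, which is why the count serves as a complexity index.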


2020 ◽  
Author(s):  
Andrea I. Luppi ◽  
Pedro A.M. Mediano ◽  
Fernando E. Rosas ◽  
Judith Allanson ◽  
John D. Pickard ◽  
...  

A central goal of neuroscience is to understand how the brain synthesises information from multiple inputs to give rise to a unified conscious experience. This process is widely believed to require integration of information. Here, we combine information theory and network science to address two fundamental questions: how is the human information-processing architecture functionally organised? And how does this organisation support human consciousness? To address these questions, we leverage the mathematical framework of Integrated Information Decomposition to delineate a cognitive architecture wherein specialised modules interact with a “synergistic global workspace,” comprising functionally distinct gateways and broadcasters. Gateway regions gather information from the specialised modules for processing in the synergistic workspace, whose contents are then further integrated to later be made widely available by broadcasters. Through data-driven analysis of resting-state functional MRI, we reveal that gateway regions correspond to the brain’s well-known default mode network, whereas broadcasters of information coincide with the executive control network. Demonstrating that this synergistic workspace supports human consciousness, we further apply Integrated Information Decomposition to BOLD signals to compute integrated information across the brain. By comparing changes due to propofol anaesthesia and severe brain injury, we demonstrate that most changes in integrated information happen within the synergistic workspace. Furthermore, loss of consciousness corresponded to reduced integrated information between gateway, but not broadcaster, regions of the synergistic workspace. Thus, loss of consciousness may coincide with breakdown of information integration by this synergistic workspace of the human brain.
Together, these findings demonstrate that refining our understanding of information-processing in the human brain through Integrated Information Decomposition can provide powerful insights into the human neurocognitive architecture, and its role in supporting consciousness.


Entropy ◽  
2018 ◽  
Vol 20 (3) ◽  
pp. 173 ◽  
Author(s):  
Jun Kitazono ◽  
Ryota Kanai ◽  
Masafumi Oizumi

The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which the information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost of exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that, if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in polynomial time by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating its accuracy in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time.
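The exhaustive search that the paper's polynomial-time algorithm replaces can be sketched generically: enumerate every bipartition and keep the one minimizing a supplied measure phi. The function below is a hypothetical sketch, not the paper's implementation, and phi stands in for any candidate Φ measure:

```python
from itertools import combinations

def minimum_information_partition(nodes, phi):
    """Exhaustive search for the bipartition minimizing phi(part_a, part_b).
    There are 2^(n-1) - 1 bipartitions, so cost grows exponentially with n."""
    nodes = list(nodes)
    best = None
    for r in range(1, len(nodes) // 2 + 1):
        for part_a in combinations(nodes, r):
            part_b = tuple(n for n in nodes if n not in part_a)
            # skip mirror duplicates when both halves have equal size
            if len(part_a) == len(part_b) and part_b < part_a:
                continue
            cost = phi(part_a, part_b)
            if best is None or cost < best[0]:
                best = (cost, part_a, part_b)
    return best
```

When Φ is submodular, this exponential enumeration can be replaced by symmetric submodular minimization (e.g. Queyranne's algorithm) running in polynomial time, which is the speed-up the study evaluates on non-submodular measures.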


2018 ◽  
Vol 119 (3) ◽  
pp. 834-848 ◽  
Author(s):  
Alexis T. Baria ◽  
Maria V. Centeno ◽  
Mariam E. Ghantous ◽  
Pei C. Chang ◽  
Daniele Procissi ◽  
...  

Even though a number of findings, based on information content or information integration, are shown to define neural underpinnings characteristic of a conscious experience, the neurophysiological mechanism of consciousness is still poorly understood. Here, we investigated the brain activity and functional connectivity changes that occur in the isoflurane-anesthetized unconscious state in contrast to the awake state in rats (awake and/or anesthetized, n = 68 rats). We examined nine information measures previously shown to distinguish between conscious states: blood oxygen level-dependent (BOLD) variability, functional connectivity strength, modularity, weighted modularity, efficiency, clustering coefficient, small-worldness, and spatial and temporal Lempel-Ziv complexity measures. We also identified modular membership, seed-based network connectivity, and absolute and normalized power spectra to assess the integrity of the BOLD functional networks between the awake and anesthetized states. fMRI BOLD variability and the related absolute power were the only information measures significantly higher during the awake state compared with isoflurane anesthesia across animals, and with varying levels of anesthesia, after correcting for motion and respiration confounds. Thus, we conclude that, at least under the specific conditions examined here, global measures of information integration/sharing do not properly distinguish the anesthetized state from wakefulness, and heightened overall, global and local, BOLD variability is the most reliable determinant of conscious brain activity relative to isoflurane anesthesia. NEW & NOTEWORTHY Multiple metrics previously suggested to distinguish between states of consciousness were compared, within and across rats, in awake and isoflurane anesthesia-induced unconscious states. All measures tested showed sensitivity to confounds after correcting for motion and for respiration changes due to anesthesia.
Resting state local BOLD variability and the related absolute power were the only information measures that robustly differentiated wakefulness states. These results caution against the general applicability of global information measures in identifying levels of consciousness, thus challenging the popular concept that these measures reflect states of consciousness, and also pointing to local signal variability as a more reliable indicator of states of wakefulness.

