PID Control as a Process of Active Inference with Linear Generative Models

Entropy ◽  
2019 ◽  
Vol 21 (3) ◽  
pp. 257 ◽  
Author(s):  
Manuel Baltieri ◽  
Christopher Buckley

In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to offer a unified understanding of life and cognition within a general mathematical framework derived from information and control theory, and statistical mechanics. However, we argue that if the active inference proposal is to be taken as a general process theory for biological systems, it is necessary to understand how it relates to existing control theoretical approaches routinely used to study and explain biological systems. For example, recently, PID (Proportional-Integral-Derivative) control has been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as chemotaxis in bacteria and amoebae, and robust adaptation in biochemical networks. In this work, we will show how PID controllers can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation when using approximate linear generative models of the world. This more general interpretation also provides a new perspective on traditional problems of PID controllers such as parameter tuning as well as the need to balance the performance and robustness of a controller. Specifically, we then show how these problems can be understood in terms of the optimisation of the precisions (inverse variances) modulating different prediction errors in the free energy functional.
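The correspondence claimed in this abstract can be illustrated with a toy simulation. The sketch below is our own minimal construction, not the authors' exact model, and all parameter values are illustrative assumptions: a first-order plant is regulated by an action that descends a quadratic free-energy gradient, i.e. a precision-weighted sum of zeroth- and first-order prediction errors. Integrating that gradient in time yields PI control, with the precisions `pi0` and `pi1` playing the role of the controller gains (derivative action would require a higher-order generative model).

```python
def simulate(pi0=8.0, pi1=1.0, dt=0.01, T=2000, target=1.0, disturbance=0.5):
    """Active-inference-style regulation of a first-order plant.
    The action u descends the free-energy gradient, a precision-weighted
    sum of zeroth- and first-order prediction errors; integrating that
    gradient in time yields PI control with gains (pi0, pi1)."""
    x, u = 0.0, 0.0
    prev_eps = target - x
    traj = []
    for _ in range(T):
        eps = target - x                    # zeroth-order prediction error
        deps = (eps - prev_eps) / dt        # first-order (velocity) error
        u += dt * (pi0 * eps + pi1 * deps)  # du/dt = -dF/du (up to sign/scale)
        prev_eps = eps
        x += dt * (-x + u + disturbance)    # plant with a constant disturbance
        traj.append(x)
    return traj

traj = simulate()
print(round(traj[-1], 3))  # ≈ 1.0: integral action removes the steady-state error
```

Raising `pi0` relative to `pi1` trades faster disturbance rejection against oscillation, which is the precision-tuning view of gain tuning mentioned in the abstract.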


2018 ◽  
Author(s):  
Manuel Baltieri ◽  
Christopher L. Buckley

Abstract In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. The Bayesian brain hypothesis, predictive coding, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to unify understandings of life and cognition within general mathematical frameworks derived from information theory, statistical physics and machine learning. Furthermore, it has been argued that one such proposal, active inference, combines both information and control theory and has its roots in cybernetics studies of the brain. The connections between information and control theory have been discussed since the 1950s by scientists like Shannon and Kalman and have recently risen to prominence in modern stochastic optimal control theory. However, the implications of the confluence of these two theoretical frameworks for the biological sciences have been slow to emerge. Here we argue that if the active inference proposal is to be taken as a general process theory for biological systems, we need to consider how existing control theoretical approaches to biological systems relate to it. In this work we will focus on PID (Proportional-Integral-Derivative) controllers, one of the most common types of regulators employed in engineering and more recently used to explain behaviour in biological systems, e.g. chemotaxis in bacteria and amoebae or robust adaptation in biochemical networks. Using active inference, we derive a probabilistic interpretation of PID controllers, showing how they can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation once we use only simple linear generative models.


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 198
Author(s):  
Stephen Fox

Active inference is a physics of life process theory of perception, action and learning that is applicable to natural and artificial agents. In this paper, active inference theory is related to different types of practice in social organization. Here, the term social organization is used to clarify that this paper does not encompass organization in biological systems. Rather, the paper addresses active inference in social organization that utilizes industrial engineering, quality management, and artificial intelligence alongside human intelligence. Social organization referred to in this paper can be in private companies, public institutions, other for-profit or not-for-profit organizations, and any combination of them. The relevance of active inference theory is explained in terms of variational free energy, prediction errors, generative models, and Markov blankets. Active inference theory is most relevant to the social organization of work that is highly repetitive. By contrast, there are more challenges involved in applying active inference theory for social organization of less repetitive endeavors such as one-of-a-kind projects. These challenges need to be addressed in order for active inference to provide a unifying framework for different types of social organization employing human and artificial intelligence.


2021 ◽  
Author(s):  
David Harris ◽  
Tom Arthur

This paper examines the application of active inference to naturalistic visuomotor control. Active inference proposes that actions serve to minimise future prediction errors and are dynamically adjusted according to uncertainty about sensory information, predictions, or the environment. We investigated whether predictive gaze behaviours are indeed adjusted in this Bayes-optimal fashion during a virtual racquetball task. In this task, participants intercepted bouncing balls with varying levels of elasticity, under conditions of high and low environmental volatility. Participants’ gaze patterns differed between stable and volatile conditions in a manner consistent with generative models of Bayes-optimal behaviour. Partially observable Markov models also revealed an increased rate of associative learning in response to unpredictable shifts in environmental probabilities, although there was no overall effect of volatility on this parameter. Findings extend active inference frameworks into complex and unconstrained visuomotor tasks and present important implications for a neurocomputational understanding of the visual guidance of action.
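The Bayes-optimal adjustment to volatility described above can be illustrated with a scalar Kalman filter (our illustration; the study itself fitted partially observable Markov models, not this filter). Treating assumed environmental volatility as the process variance, the steady-state Kalman gain, which acts as the effective learning rate, increases with volatility:

```python
def steady_state_gain(volatility, obs_var=1.0, steps=100):
    """Scalar Kalman filter: iterate the variance recursion until the
    gain settles. 'volatility' (process variance) is our stand-in for
    environmental volatility."""
    p = 1.0                                # posterior variance
    k = 0.0
    for _ in range(steps):
        p_pred = p + volatility            # prediction: volatility inflates uncertainty
        k = p_pred / (p_pred + obs_var)    # Kalman gain = effective learning rate
        p = (1.0 - k) * p_pred             # measurement update shrinks uncertainty
    return k

stable = steady_state_gain(volatility=0.01)
volatile = steady_state_gain(volatility=1.0)
print(stable < volatile)  # → True: volatile worlds warrant faster belief updating
```

This mirrors the finding that gaze and learning rates were adjusted upward under high environmental volatility.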


2017 ◽  
Vol 14 (131) ◽  
pp. 20170096 ◽  
Author(s):  
Paco Calvo ◽  
Karl Friston

In this article we account for the way plants respond to salient features of their environment under the free-energy principle for biological systems. Biological self-organization amounts to the minimization of surprise over time. We posit that any self-organizing system must embody a generative model whose predictions ensure that (expected) free energy is minimized through action. Plants respond in a fast, and yet coordinated manner, to environmental contingencies. They pro-actively sample their local environment to elicit information with an adaptive value. Our main thesis is that plant behaviour takes place by way of a process (active inference) that predicts the environmental sources of sensory stimulation. This principle, we argue, endows plants with a form of perception that underwrites purposeful, anticipatory behaviour. The aim of the article is to assess the prospects of a radical predictive processing story that would follow naturally from the free-energy principle for biological systems; an approach that may ultimately bear upon our understanding of life and cognition more broadly.


2021 ◽  
Author(s):  
Alexander Tschantz ◽  
Laura Barca ◽  
Domenico Maisto ◽  
Christopher L. Buckley ◽  
Anil K. Seth ◽  
...  

Abstract The adaptive regulation of bodily and interoceptive parameters, such as body temperature, thirst and hunger is a central problem for any biological organism. Here, we present a series of simulations using the framework of Active Inference to formally characterize interoceptive control and some of its dysfunctions. We start from the premise that the goal of interoceptive control is to minimize a discrepancy between expected and actual interoceptive sensations (i.e., a prediction error or free energy). Importantly, living organisms can achieve this goal by using various forms of interoceptive control: homeostatic, allostatic and goal-directed. We provide a computationally-guided analysis of these different forms of interoceptive control, by showing that they correspond to distinct generative models within Active Inference. Furthermore, we illustrate how these generative models may support empirical research, by predicting physiological and brain signals that may accompany both adaptive and maladaptive interoceptive control.

Highlights
- We use Active Inference to provide formal models of interoceptive control
- We model homeostatic, allostatic and goal-directed forms of interoceptive control
- Our simulations illustrate both adaptive interoceptive control and its dysfunctions
- We discuss how the models can aid empirical research on interoception
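The contrast between two of the regimes described above can be sketched in a few lines. This is our own toy construction, not the paper's simulations, and all numbers (setpoint, drift, precision) are illustrative: homeostatic control acts reflexively on the current interoceptive prediction error, while allostatic control additionally cancels a predicted perturbation (here, a known cooling drift) before it bites.

```python
def regulate(setpoint=37.0, drift=-0.5, precision=4.0,
             anticipate=False, dt=0.1, steps=600):
    """Toy interoceptive control loop: reflexive (homeostatic) action on
    the precision-weighted prediction error, optionally augmented with
    anticipatory (allostatic) cancellation of an expected perturbation."""
    temp = 30.0
    for _ in range(steps):
        eps = setpoint - temp            # interoceptive prediction error
        action = precision * eps         # reflex: precision-weighted correction
        if anticipate:
            action -= drift              # allostatic: pre-empt the expected drift
        temp += dt * (drift + action)    # body cools passively, action warms it
    return temp

print(regulate(anticipate=False))  # settles slightly below 37: residual error
print(regulate(anticipate=True))   # settles at the 37-degree setpoint
```

The purely reflexive loop leaves a small steady-state offset, which the anticipatory term removes, one way to see why allostatic generative models outperform purely homeostatic ones.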


2018 ◽  
Vol 15 (138) ◽  
pp. 20170792 ◽  
Author(s):  
Michael Kirchhoff ◽  
Thomas Parr ◽  
Ensor Palacios ◽  
Karl Friston ◽  
Julian Kiverstein

This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment.
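A Markov blanket in the classical graphical-model sense can be computed directly from a directed graph: it is the node's parents, its children, and its children's other parents. The toy network below is our own example, wiring up the four-way partition into external, sensory, active and internal states commonly used in this literature:

```python
def markov_blanket(node, parents):
    """Markov blanket of a node in a directed graphical model: its
    parents, its children, and its children's other parents. Conditioned
    on this set, the node is independent of every other node."""
    children = {c for c, ps in parents.items() if node in ps}
    coparents = {p for c in children for p in parents[c]} - {node}
    return set(parents.get(node, ())) | children | coparents

# Toy directed cycle over the four-way partition used in this literature:
# external -> sensory -> internal -> active -> external.
g = {
    "sensory":  {"external"},
    "internal": {"sensory"},
    "active":   {"internal"},
    "external": {"active"},
}
print(sorted(markov_blanket("internal", g)))  # → ['active', 'sensory']
```

Here the sensory and active states jointly screen the internal states off from the external ones, which is exactly the statistical boundary the abstract refers to.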


2019 ◽  
Author(s):  
Manuel Baltieri ◽  
Christopher Buckley

The free energy principle describes cognitive functions such as perception, action, learning and attention in terms of surprisal minimisation. Under simplifying assumptions, agents are depicted as systems minimising a weighted sum of prediction errors encoding the mismatch between incoming sensations and an agent's predictions about such sensations. The "dark room" is defined as a state that an agent would occupy should it only look to minimise this sum of prediction errors. This (paradoxical) state emerges from the contrast between attempts to describe the richness of human and animal behaviour in terms of surprisal minimisation and the trivial solution of a dark room, where the complete lack of sensory stimuli would provide the easiest way to minimise prediction errors, i.e., to be in a perfectly predictable state of darkness with no incoming stimuli. Using a process theory derived from the free energy principle, active inference, we investigate with an agent-based model the meaning of the dark room problem and discuss some of its implications for natural and artificial systems. In this setup, we propose that the presence of this paradox is primarily due to the long-standing belief that agents should encode accurate world models, typical of traditional (computational) theories of cognition.
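The dark room argument can be made concrete with two categorical generative models. This is a toy illustration of our own, not the paper's agent-based model: darkness only minimises surprisal for an agent whose model assigns darkness high probability, while for a model encoding phenotype-congruent expectations of stimulation the dark room is itself surprising.

```python
import math

def surprisal(obs, model):
    """Surprisal -log p(o) of an observation under a (categorical)
    generative model."""
    return -math.log(model[obs])

# A model that expects darkness makes the dark room the least surprising place...
dark_expecting = {"dark": 0.99, "light": 0.01}
# ...but a model that predicts stimulation, as befits a foraging phenotype, does not.
stim_expecting = {"dark": 0.05, "light": 0.95}

print(surprisal("dark", dark_expecting) < surprisal("light", dark_expecting))  # True
print(surprisal("dark", stim_expecting) > surprisal("light", stim_expecting))  # True
```

On this reading, the paradox dissolves once the generative model is understood as encoding the agent's preferred states rather than an accurate world model.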


2021 ◽  
pp. 1-63
Author(s):  
Jelle Bruineberg ◽  
Krzysztof Dolega ◽  
Joe Dewhurst ◽  
Manuel Baltieri

Abstract The free energy principle, an influential framework in computational neuroscience and theoretical neurobiology, starts from the assumption that living systems ensure adaptive exchanges with their environment by minimizing the objective function of variational free energy. Following this premise, it claims to deliver a promising integration of the life sciences. In recent work, Markov blankets, one of the central constructs of the free energy principle, have been applied to resolve debates central to philosophy (such as demarcating the boundaries of the mind). The aim of this paper is twofold. First, we trace the development of Markov blankets starting from their standard application in Bayesian networks, via variational inference, to their use in the literature on active inference. We then identify a persistent confusion in the literature between the formal use of Markov blankets as an epistemic tool for Bayesian inference, and their novel metaphysical use in the free energy framework to demarcate the physical boundary between an agent and its environment. Consequently, we propose to distinguish between ‘Pearl blankets’ to refer to the original epistemic use of Markov blankets and ‘Friston blankets’ to refer to the new metaphysical construct. Second, we use this distinction to critically assess claims resting on the application of Markov blankets to philosophical problems. We suggest that this literature would do well to differentiate between two different research programs: ‘inference with a model’ and ‘inference within a model’. Only the latter is capable of doing metaphysical work with Markov blankets, but requires additional philosophical premises and cannot be justified by an appeal to the success of the mathematical framework alone.


2021 ◽  
Vol 15 ◽  
Author(s):  
Hideyoshi Yanagisawa

Appropriate levels of arousal potential induce hedonic responses (i.e., emotional valence). However, the relationship between arousal potential and its factors (e.g., novelty, complexity, and uncertainty) has not been formalized. This paper proposes a mathematical model that explains emotional arousal using minimized free energy to represent information content processed in the brain after sensory stimuli are perceived and recognized (i.e., sensory surprisal). This work mathematically demonstrates that sensory surprisal represents the summation of information from novelty and uncertainty, and that the uncertainty converges to perceived complexity with sufficient sampling from a stimulus source. Novelty, uncertainty, and complexity all act as collative properties that form arousal potential. Analysis using a Gaussian generative model shows that the free energy is formed as a quadratic function of prediction errors based on the difference between prior expectation and peak of likelihood. The model predicts two interaction effects on free energy: that between prediction error and prior uncertainty (i.e., prior variance) and that between prediction error and sensory variance. The potential of free energy as a mathematical principle for explaining the initiation of emotions is then discussed. The model provides a general mathematical framework for understanding and predicting the emotions caused by novelty, uncertainty, and complexity. The mathematical model of arousal can help predict acceptable novelty and complexity based on a target population under different uncertainty levels mitigated by prior knowledge and experience.
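The quadratic form and the two interaction effects described above can be reproduced with a conjugate Gaussian model. The sketch below is our reconstruction under the assumption of an identity sensory mapping, not the paper's full derivation: with an exact Gaussian posterior, the minimised free energy equals the negative log evidence, a quadratic function of the prediction error between prior expectation and likelihood peak whose curvature shrinks as either variance grows.

```python
import math

def min_free_energy(pred_error, prior_var, sensory_var):
    """Minimised variational free energy for a conjugate Gaussian model
    (identity sensory mapping assumed): equals the negative log evidence,
    quadratic in the prediction error between prior mean and likelihood peak."""
    total_var = prior_var + sensory_var
    return 0.5 * (pred_error**2 / total_var
                  + math.log(2.0 * math.pi * total_var))

# Interaction effects: the same prediction error carries less arousal
# potential when either the prior or the sensory variance is large.
tight_prior = min_free_energy(1.0, prior_var=0.1, sensory_var=0.1)
loose_prior = min_free_energy(1.0, prior_var=1.0, sensory_var=0.1)
print(tight_prior > loose_prior)  # → True
```

This is the model's route to the claim that prior knowledge and experience (lower effective uncertainty about a stimulus) modulate how arousing a given novelty level is.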

