A probabilistic interpretation of PID controllers using active inference

2018 ◽  
Author(s):  
Manuel Baltieri ◽  
Christopher L. Buckley

In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. The Bayesian brain hypothesis, predictive coding, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to unify understandings of life and cognition within general mathematical frameworks derived from information theory, statistical physics and machine learning. Furthermore, it has been argued that one such proposal, active inference, combines both information and control theory and has its roots in cybernetics studies of the brain. The connections between information and control theory have been discussed since the 1950s by scientists like Shannon and Kalman and have recently risen to prominence in modern stochastic optimal control theory. However, the implications of the confluence of these two theoretical frameworks for the biological sciences have been slow to emerge. Here we argue that if the active inference proposal is to be taken as a general process theory for biological systems, we need to consider how existing control theoretical approaches to biological systems relate to it. In this work we will focus on PID (Proportional-Integral-Derivative) controllers, one of the most common types of regulators employed in engineering and more recently used to explain behaviour in biological systems, e.g. chemotaxis in bacteria and amoebae or robust adaptation in biochemical networks. Using active inference, we derive a probabilistic interpretation of PID controllers, showing how they can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation once we use only simple linear generative models.
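As a hedged sketch of the kind of construction this abstract gestures at (generic active inference notation, not necessarily the authors' exact derivation), the Laplace-encoded free energy of a simple linear Gaussian generative model reduces to a sum of precision-weighted squared prediction errors, and both perception and action descend its gradients:

```latex
% Linear Gaussian generative model (symbols are our own, not the paper's):
%   observation:  y = x + z,   dynamics:  \dot{x} = -x + w,
%   with sensory and process precisions \pi_z, \pi_w.
F \approx \tfrac{1}{2}\,\pi_z\,(y - \mu)^2 \;+\; \tfrac{1}{2}\,\pi_w\,(\mu' + \mu)^2
% Perception updates the estimate \mu; action changes the observations:
\dot{\mu} = \mu' - \frac{\partial F}{\partial \mu},
\qquad
\dot{a} = -\frac{\partial F}{\partial a} = -\,\pi_z\,(y - \mu)\,\frac{\partial y}{\partial a}
```

Acting to suppress the sensory prediction error in this way yields proportional-like control; richer generative models (e.g. generalised coordinates of motion) introduce further error terms.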

Entropy ◽  
2019 ◽  
Vol 21 (3) ◽  
pp. 257 ◽  
Author(s):  
Manuel Baltieri ◽  
Christopher Buckley

In the past few decades, probabilistic interpretations of brain functions have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive functions that claim to offer a unified understanding of life and cognition within a general mathematical framework derived from information and control theory, and statistical mechanics. However, we argue that if the active inference proposal is to be taken as a general process theory for biological systems, it is necessary to understand how it relates to existing control theoretical approaches routinely used to study and explain biological systems. For example, recently, PID (Proportional-Integral-Derivative) control has been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as chemotaxis in bacteria and amoebae, and robust adaptation in biochemical networks. In this work, we will show how PID controllers can fit a more general theory of life and cognition under the principle of (variational) free energy minimisation when using approximate linear generative models of the world. This more general interpretation also provides a new perspective on traditional problems of PID controllers, such as parameter tuning and the need to balance the performance and robustness of a controller. Specifically, we then show how these problems can be understood in terms of the optimisation of the precisions (inverse variances) modulating different prediction errors in the free energy functional.
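As a hedged illustration of the control-theoretic side of this abstract (our own toy example, not the authors' implementation), a discrete-time PID loop regulating a first-order plant takes only a few lines; under the paper's reading, the gains kp, ki and kd play the role of precisions weighting prediction errors at different temporal orders:

```python
# Minimal discrete-time PID sketch (illustrative setup, not the authors' code).

def pid_step(error, integral, prev_error, dt, kp, ki, kd):
    """One PID update: u = kp*e + ki*integral(e) + kd*de/dt."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, integral

# Regulate a first-order plant dx/dt = -x + u toward a setpoint.
dt, setpoint = 0.01, 1.0
x, integral, prev_e = 0.0, 0.0, 0.0
for _ in range(2000):
    e = setpoint - x                    # "prediction error" of the regulator
    u, integral = pid_step(e, integral, prev_e, dt, kp=2.0, ki=1.0, kd=0.1)
    prev_e = e
    x += (-x + u) * dt                  # forward-Euler plant update

print(round(x, 3))  # settles close to the setpoint
```

The integral term removes the steady-state offset here; in the paper's probabilistic reading, retuning (kp, ki, kd) corresponds to adjusting precisions rather than ad hoc gain adjustment.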



Author(s):  
L. Daniel Metz

Motor performance calls into play a number of complex physiological and biological systems. An understanding of the function and behavior of such systems is necessary if motor performance is to be properly analyzed and helpful if it is to be improved. The concepts of systems and control theory offer a powerful (though sometimes not fully exploited) methodological technique for achieving such an understanding. This paper discusses some of the elementary concepts of systems theory as applied to motor performance and presents qualitative discussions of its usefulness in that field.


2019 ◽  
Vol 28 (4) ◽  
pp. 225-239 ◽  
Author(s):  
Maxwell JD Ramstead ◽  
Michael D Kirchhoff ◽  
Karl J Friston

The aim of this article is to clarify how best to interpret some of the central constructs that underwrite the free-energy principle (FEP) – and its corollary, active inference – in theoretical neuroscience and biology: namely, the role that generative models and variational densities play in this theory. We argue that these constructs have been systematically misrepresented in the literature, because of the conflation of the FEP and active inference, on the one hand, with distinct (albeit closely related) brain-centred Bayesian formulations, on the other – variously known as predictive processing, predictive coding or the prediction error minimisation framework. More specifically, we examine two contrasting interpretations of these models: a structural representationalist interpretation and an enactive interpretation. We argue that the structural representationalist interpretation of generative and recognition models does not do justice to the role that these constructs play in active inference under the FEP. We propose an enactive interpretation of active inference – what might be called enactive inference. In active inference under the FEP, the generative and recognition models are best cast as realising inference and control – the self-organising, belief-guided selection of action policies – and do not have the properties ascribed by structural representationalists.


2017 ◽  
Vol 14 (131) ◽  
pp. 20170096 ◽  
Author(s):  
Paco Calvo ◽  
Karl Friston

In this article we account for the way plants respond to salient features of their environment under the free-energy principle for biological systems. Biological self-organization amounts to the minimization of surprise over time. We posit that any self-organizing system must embody a generative model whose predictions ensure that (expected) free energy is minimized through action. Plants respond in a fast, and yet coordinated manner, to environmental contingencies. They proactively sample their local environment to elicit information with an adaptive value. Our main thesis is that plant behaviour takes place by way of a process (active inference) that predicts the environmental sources of sensory stimulation. This principle, we argue, endows plants with a form of perception that underwrites purposeful, anticipatory behaviour. The aim of the article is to assess the prospects of a radical predictive processing story that would follow naturally from the free-energy principle for biological systems; an approach that may ultimately bear upon our understanding of life and cognition more broadly.


Author(s):  
Alain Goriely

Mathematics has an important place in society, but one that is not always obvious. ‘Mathematics, what is it good for? Quaternions, knots, and more DNA’ considers examples of mathematical theories—quaternions and knot theory—that have found unexpected applicability in sciences and engineering. Quaternions, introduced by William Rowan Hamilton in 1843 during his work on complex numbers, are now routinely used in computer graphics, robotics, missile and satellite guidance, as well as in orbital mechanics and control theory. Knot theory has found unexpected applications in the study of many physical and biological systems, including the study of DNA. Another example is the use of number theory in cryptography.


2021 ◽  
Author(s):  
Alexander Tschantz ◽  
Laura Barca ◽  
Domenico Maisto ◽  
Christopher L. Buckley ◽  
Anil K. Seth ◽  
...  

The adaptive regulation of bodily and interoceptive parameters, such as body temperature, thirst and hunger, is a central problem for any biological organism. Here, we present a series of simulations using the framework of Active Inference to formally characterize interoceptive control and some of its dysfunctions. We start from the premise that the goal of interoceptive control is to minimize a discrepancy between expected and actual interoceptive sensations (i.e., a prediction error or free energy). Importantly, living organisms can achieve this goal by using various forms of interoceptive control: homeostatic, allostatic and goal-directed. We provide a computationally-guided analysis of these different forms of interoceptive control, by showing that they correspond to distinct generative models within Active Inference. Furthermore, we illustrate how these generative models may support empirical research, by predicting physiological and brain signals that may accompany both adaptive and maladaptive interoceptive control.

Highlights

- We use Active Inference to provide formal models of interoceptive control
- We model homeostatic, allostatic and goal-directed forms of interoceptive control
- Our simulations illustrate both adaptive interoceptive control and its dysfunctions
- We discuss how the models can aid empirical research on interoception
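A minimal sketch of the homeostatic case described above, assuming a Gaussian generative model with a fixed prior expectation over an interoceptive state (the names mu_prior, precision, ambient and the plant equation are ours, not the paper's):

```python
# Toy homeostatic regulation as prediction-error minimisation
# (hypothetical setup, not the paper's simulations).

mu_prior = 37.0     # prior expectation over the interoceptive state (setpoint)
precision = 2.0     # inverse variance of interoceptive predictions
ambient = 30.0      # environmental pull away from the setpoint
dt, x, action = 0.01, 33.0, 0.0

for _ in range(5000):
    eps = x - mu_prior                  # interoceptive prediction error
    # Active inference: action descends F = 0.5 * precision * eps**2
    action -= dt * precision * eps
    # Simple first-order plant: relaxes toward ambient, driven by action
    x += dt * ((ambient - x) + action)

print(round(x, 2))  # settles at the prior expectation
```

Allostatic or goal-directed variants would replace the fixed prior with a context-dependent or anticipatory one; the action update stays the same gradient descent.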


2018 ◽  
Vol 15 (138) ◽  
pp. 20170792 ◽  
Author(s):  
Michael Kirchhoff ◽  
Thomas Parr ◽  
Ensor Palacios ◽  
Karl Friston ◽  
Julian Kiverstein

This work addresses the autonomous organization of biological systems. It does so by considering the boundaries of biological systems, from individual cells to Homo sapiens, in terms of the presence of Markov blankets under the active inference scheme—a corollary of the free energy principle. A Markov blanket defines the boundaries of a system in a statistical sense. Here we consider how a collective of Markov blankets can self-assemble into a global system that itself has a Markov blanket; thereby providing an illustration of how autonomous systems can be understood as having layers of nested and self-sustaining boundaries. This allows us to show that: (i) any living system is a Markov blanketed system and (ii) the boundaries of such systems need not be co-extensive with the biophysical boundaries of a living organism. In other words, autonomous systems are hierarchically composed of Markov blankets of Markov blankets—all the way down to individual cells, all the way up to you and me, and all the way out to include elements of the local environment.


TAPPI Journal ◽  
2009 ◽  
Vol 8 (1) ◽  
pp. 4-11
Author(s):  
MOHAMED CHBEL ◽  
LUC LAPERRIÈRE

Pulp and paper processes frequently present nonlinear behavior, which means that process dynamics change with the operating points. These nonlinearities can challenge process control. PID controllers are the most popular controllers because they are simple and robust. However, a fixed set of PID tuning parameters is generally not sufficient to optimize control of the process. Problems related to nonlinearities such as sluggish or oscillatory response can arise in different operating regions. Gain scheduling is a potential solution. In processes with multiple control objectives, the control strategy must further evaluate loop interactions to decide on the pairing of manipulated and controlled variables that minimize the effect of such interactions and hence optimize the controller's performance and stability. Using the CADSIM Plus™ commercial simulation software, we developed a Jacobian simulation module that enables automatic bumps on the manipulated variables to calculate process gains at different operating points. These gains can be used in controller tuning. The module also enables the control system designer to evaluate loop interactions in a multivariable control system by calculating the Relative Gain Array (RGA) matrix, of which the Jacobian is an essential part.
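The RGA calculation mentioned in this abstract is compact enough to sketch directly. Given a steady-state gain matrix K (the values below are illustrative, not taken from the study), the RGA is the element-wise product of K with the transpose of its inverse:

```python
import numpy as np

def rga(K):
    """Relative Gain Array: K ∘ (K^-1)^T, element-wise (Hadamard) product."""
    K = np.asarray(K, dtype=float)
    return K * np.linalg.inv(K).T

# 2x2 example: off-diagonal gains introduce loop interaction
K = [[2.0, 0.5],
     [0.3, 1.0]]
print(np.round(rga(K), 3))
```

A useful sanity check is that every row and every column of the RGA sums to one; diagonal entries near 1 suggest the diagonal pairing of manipulated and controlled variables interacts least.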

