Bayesian cyclic networks, mutual information and reduced-order Bayesian inference

Author(s):  
Robert K. Niven ◽  
Bernd R. Noack ◽  
Eurika Kaiser ◽  
Louis Cattafesta ◽  
Laurent Cordier ◽  
...  
Information ◽  
2019 ◽  
Vol 10 (8) ◽  
pp. 261 ◽  
Author(s):  
Lu

An important problem in machine learning is that, when using more than two labels, it is very difficult to construct and optimize a group of learning functions that remain useful when the prior distribution of instances changes. To resolve this problem, semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms are combined to form a systematic solution. A semantic channel in G theory consists of a group of truth functions or membership functions. Compared with the likelihood functions, Bayesian posteriors, and logistic functions typically used in popular methods, membership functions are more convenient to use, providing learning functions that do not suffer from the above problem. In LBI, every label is learned independently. For multilabel learning, we can directly obtain a group of optimized membership functions from a large enough sample with labels, without preparing different samples for different labels. Furthermore, a group of CM algorithms is developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions in a two-dimensional feature space, only 2–3 iterations are required for the mutual information between the three classes and three labels to surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved to form the CM-EM algorithm, which can outperform the EM algorithm when the mixture ratios are imbalanced, or when local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classification in high-dimensional feature spaces. LBI needs further investigation for the unification of statistics and logic.
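For context, the baseline that the CM-EM algorithm improves upon is the standard EM algorithm for Gaussian mixtures. The following is a minimal sketch of that baseline for a one-dimensional, two-component mixture with the imbalanced mixture ratios the abstract mentions; it is an illustrative implementation of standard EM, not the authors' CM-EM, and all variable names are hypothetical.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_gmm(x, n_iter=50):
    # crude initialization: means at the 10th/90th percentiles of the data
    mu = np.array([np.percentile(x, 10), np.percentile(x, 90)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi[:, None] * gaussian_pdf(x[None, :], mu[:, None], sigma[:, None])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate mixture ratios, means, and standard deviations
        nk = resp.sum(axis=1)
        pi = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
# imbalanced mixture ratios (0.8 / 0.2), the regime where CM-EM is claimed to help
x = np.concatenate([rng.normal(-2, 1, 800), rng.normal(3, 1, 200)])
pi, mu, sigma = em_gmm(x)
```

On this well-separated example plain EM recovers the means and the 0.8/0.2 ratios; the abstract's claim concerns harder settings where this baseline converges slowly or to a local optimum.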


PAMM ◽  
2021 ◽  
Vol 20 (1) ◽  
Author(s):  
A. Robens-Radermacher ◽  
F. Held ◽  
I. Coelho Lima ◽  
T. Titscher ◽  
J. F. Unger

Author(s):  
Francesco Garita ◽  
Hans Yu ◽  
Matthew P. Juniper

Abstract We combine a thermoacoustic experiment with a thermoacoustic reduced order model using Bayesian inference to accurately learn the parameters of the model, rendering it predictive. The experiment is a vertical Rijke tube containing an electric heater. The heater drives a base flow via natural convection, and thermoacoustic oscillations via velocity-driven heat release fluctuations. The decay rates and frequencies of these oscillations are measured every few seconds by acoustically forcing the system via a loudspeaker placed at the bottom of the tube. More than 320,000 temperature measurements are used to compute the state and parameters of the base flow model using the Ensemble Kalman Filter. A wave-based network model is then used to describe the acoustics inside the tube. We balance momentum and energy at the boundary between two adjacent elements, and model the viscous and thermal dissipation mechanisms in the boundary layer and at the heater and thermocouple locations. Finally, we tune the parameters of two different thermoacoustic models on an experimental dataset that comprises more than 40,000 experiments. This study shows that, with thorough Bayesian inference, a qualitative model can become quantitatively accurate, without overfitting, as long as it contains the most influential physical phenomena.
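The Ensemble Kalman Filter step mentioned above assimilates measurements by updating an ensemble of model states with a sample-covariance Kalman gain. The following is a generic scalar sketch of a stochastic EnKF update, estimating an unknown constant (standing in, hypothetically, for a base-flow parameter) from noisy temperature-like measurements; it is not the authors' base-flow model, and the numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

theta_true = 4.0                          # unknown parameter to recover
n_ens, n_steps = 100, 50                  # ensemble size, assimilation steps
obs_std = 0.5
r = obs_std ** 2                          # observation-noise variance

ens = rng.normal(0.0, 2.0, n_ens)         # prior ensemble for the parameter
for _ in range(n_steps):
    y = theta_true + rng.normal(0.0, obs_std)        # noisy observation
    p = ens.var(ddof=1)                              # ensemble forecast variance
    k = p / (p + r)                                  # Kalman gain (observation operator H = 1)
    perturbed = y + rng.normal(0.0, obs_std, n_ens)  # perturbed observations (stochastic EnKF)
    ens = ens + k * (perturbed - ens)                # analysis update

print(ens.mean())  # ensemble mean converges toward theta_true
```

The perturbed observations keep the ensemble spread consistent with the posterior variance; without them the ensemble would collapse too quickly.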


2017 ◽  
Author(s):  
Il Memming Park ◽  
Jonathan W. Pillow

Abstract The efficient coding hypothesis, which proposes that neurons are optimized to maximize information about the environment, has provided a guiding theoretical framework for sensory and systems neuroscience. More recently, a theory known as the Bayesian Brain hypothesis has focused on the brain’s ability to integrate sensory and prior sources of information in order to perform Bayesian inference. However, there is as yet no comprehensive theory connecting these two theoretical frameworks. We bridge this gap by formalizing a Bayesian theory of efficient coding. We define Bayesian efficient codes in terms of four basic ingredients: (1) a stimulus prior distribution; (2) an encoding model; (3) a capacity constraint, specifying a neural resource limit; and (4) a loss function, quantifying the desirability or undesirability of various posterior distributions. Classic efficient codes can be seen as a special case in which the loss function is the posterior entropy, leading to a code that maximizes mutual information, but alternate loss functions give solutions that differ dramatically from information-maximizing codes. In particular, we show that decorrelation of sensory inputs, which is optimal under classic efficient codes in low-noise settings, can be disadvantageous for loss functions that penalize large errors. Bayesian efficient coding therefore enlarges the family of normatively optimal codes and provides a more general framework for understanding the design principles of sensory systems. We examine Bayesian efficient codes for linear receptive fields and nonlinear input-output functions, and show that our theory invites reinterpretation of Laughlin’s seminal analysis of efficient coding in the blowfly visual system.

One of the primary goals of theoretical neuroscience is to understand the functional organization of neurons in the early sensory pathways and the principles governing them. Why do sensory neurons amplify some signals and filter out others? What can explain the particular configurations and types of neurons found in early sensory systems? What general principles can explain the solutions evolution has selected for extracting signals from the sensory environment?

Two of the most influential theories for addressing these questions are the “efficient coding” hypothesis and the “Bayesian brain” hypothesis. The efficient coding hypothesis, introduced by Attneave and Barlow more than fifty years ago, uses ideas from Shannon’s information theory to formulate a theory of normatively optimal neural coding [1, 2]. The Bayesian brain hypothesis, on the other hand, focuses on the brain’s ability to perform Bayesian inference, and can be traced back to ideas from Helmholtz about optimal perceptual inference [3–7].

A substantial literature has sought to alter or expand the original efficient coding hypothesis [5, 8–18], and a large number of papers have considered optimal codes in the context of Bayesian inference [19–26]. However, the two theories have never been formally connected within a single, comprehensive theoretical framework. Here we propose to fill this gap by formulating a general Bayesian theory of efficient coding that unites the two hypotheses. We begin by reviewing the key elements of each theory and then describe a framework for unifying them. Our approach involves combining a prior and model-based likelihood function with a neural resource constraint and a loss functional that quantifies what makes for a “good” posterior distribution. We show that classic efficient codes arise when we use information-theoretic quantities for these ingredients, but that a much larger family of Bayesian efficient codes can be constructed by allowing these ingredients to vary. We explore Bayesian efficient codes for several important cases of interest, namely linear receptive fields and nonlinear response functions. The latter case was examined in an influential paper by Laughlin that examined contrast coding in the blowfly large monopolar cells (LMCs) [27]; we reanalyze data from this paper and argue that LMC responses are in fact better described as minimizing the average square-root error than as maximizing mutual information.
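The classic infomax result behind Laughlin's analysis is concrete: for a noiseless neuron with a bounded output range, the mutual-information-maximizing nonlinearity is the cumulative distribution function of the stimulus prior, so that responses are uniformly distributed (histogram equalization). The sketch below illustrates this with an assumed Gaussian contrast prior; it is a generic demonstration of the infomax principle, not the paper's square-root-loss analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
# assumed stimulus prior: Gaussian contrast values (illustrative choice)
stimuli = rng.normal(0.0, 1.0, 100_000)

sorted_s = np.sort(stimuli)

def infomax_response(s):
    # empirical CDF of the stimulus prior = information-maximizing nonlinearity
    return np.searchsorted(sorted_s, s) / len(sorted_s)

responses = infomax_response(stimuli)
# mapping each stimulus through its own CDF equalizes the response histogram
print(responses.mean(), responses.var())  # ≈ 0.5 and ≈ 1/12 (uniform on [0, 1])
```

An alternative loss function, such as the square-root error the paper argues for, would instead concentrate response resolution where large errors are most costly, yielding a nonlinearity that deviates from the stimulus CDF.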


