Optimal Experimental Design Using a Consistent Bayesian Approach

Author(s):  
Scott N. Walsh ◽  
Tim M. Wildey ◽  
John D. Jakeman

We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
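The expected information gain described above can be sketched in a few lines of Python. This is a minimal illustration of the consistent-Bayes construction, not the paper's actual models: the quadratic parameter-to-observable map, the uniform prior, and the Gaussian observed density are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)

def model(lam):
    # hypothetical scalar quantity-of-interest map (assumption)
    return lam ** 2

# samples from the prior on the parameter
prior_samples = rng.uniform(-1.0, 1.0, 5000)
q = model(prior_samples)

# push-forward of the prior through the model, estimated by KDE
pf = gaussian_kde(q)

# assumed observed density on the quantity of interest
obs = norm(loc=0.25, scale=0.1)

# consistent-Bayes posterior density = prior * obs(Q) / push-forward(Q);
# the ratio acts as an importance weight on prior samples
w = obs.pdf(q) / pf(q)
w /= w.mean()  # self-normalize the weights

# expected information gain = KL(posterior || prior),
# estimated as E_prior[w log w]
kl = float(np.mean(w * np.log(np.maximum(w, 1e-300))))
```

The design step then repeats this computation for each candidate observation (or set of observations) and selects the one with the largest gain.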

2020 ◽  
Vol 53 (3) ◽  
pp. 800-810
Author(s):  
Frank Heinrich ◽  
Paul A. Kienzle ◽  
David P. Hoogerheide ◽  
Mathias Lösche

A framework is applied to quantify information gain from neutron or X-ray reflectometry experiments [Treece, Kienzle, Hoogerheide, Majkrzak, Lösche & Heinrich (2019). J. Appl. Cryst. 52, 47–59], in an in-depth investigation into the design of scattering contrast in biological and soft-matter surface architectures. To focus the experimental design on regions of interest, the marginalization of the information gain with respect to a subset of model parameters describing the structure is implemented. Surface architectures of increasing complexity from a simple model system to a protein–lipid membrane complex are simulated. The information gain from virtual surface scattering experiments is quantified as a function of the scattering length density of molecular components of the architecture and the surrounding aqueous bulk solvent. It is concluded that the information gain is mostly determined by the local scattering contrast of a feature of interest with its immediate molecular environment, and experimental design should primarily focus on this region. The overall signal-to-noise ratio of the measured reflectivity modulates the information gain globally and is a second factor to be taken into consideration.
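When prior and posterior are approximated as Gaussians, the marginalized information gain over a subset of parameters has a closed form: the Kullback-Leibler divergence restricted to the corresponding sub-blocks of the mean and covariance. The numbers below are purely illustrative assumptions, not values from the reflectometry study.

```python
import numpy as np

def gauss_kl(mu0, S0, mu1, S1):
    # KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians
    k = len(mu0)
    S1inv = np.linalg.inv(S1)
    d = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + d @ S1inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# hypothetical prior and posterior over three structural parameters
mu_pr, S_pr = np.zeros(3), np.eye(3)
mu_po = np.array([0.4, 0.1, 0.0])
S_po = np.diag([0.2, 0.9, 1.0])

# information gain over the full parameter vector
full_gain = gauss_kl(mu_po, S_po, mu_pr, S_pr)

# marginalized gain for the parameter of interest (index 0):
# take the corresponding sub-vectors and sub-matrices
idx = [0]
marg_gain = gauss_kl(mu_po[idx], S_po[np.ix_(idx, idx)],
                     mu_pr[idx], S_pr[np.ix_(idx, idx)])
```

The marginal gain is never larger than the full gain, which is what lets the design focus on the region of interest without being dominated by nuisance parameters.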


2016 ◽  
Vol 2016 ◽  
pp. 1-14
Author(s):  
Lin-Ping Song ◽  
Leonard R. Pasion ◽  
Nicolas Lhomme ◽  
Douglas W. Oldenburg

This work, under the optimal experimental design framework, investigates the sensor placement problem that aims to guide electromagnetic induction (EMI) sensing of multiple objects. We use the linearized model covariance matrix as a measure of estimation error to present a sequential experimental design (SED) technique. The technique recursively minimizes data misfit to update model parameters and maximizes an information gain function for a future survey relative to previous surveys. The fundamental process of the SED seeks to increase weighted sensitivities to targets when placing sensors. The synthetic and field experiments demonstrate that SED can be used to guide the sensing process for an effective interrogation. It can also serve as a theoretical basis for improving empirical survey operations. We further study the sensitivity of the SED to the number of objects within the sensing range. The tests suggest that a model that appropriately overrepresents the expected anomalies might be a feasible choice.
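The core of such a sequential design can be sketched as a greedy loop that adds, at each step, the sensor position whose linearized sensitivities most increase the determinant of the accumulated Fisher information (equivalently, most shrink the linearized model covariance). The random sensitivity matrix and noise level below are assumptions for illustration, not the EMI forward model.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical linearized sensitivities: each row holds one candidate
# sensor location's Jacobian with respect to the model parameters
J = rng.normal(size=(20, 4))
noise_var = 0.1

def greedy_sed(J, n_pick):
    # sequentially pick the sensor that maximizes the determinant of
    # the accumulated Fisher information (a D-optimality criterion)
    chosen = []
    M = 1e-6 * np.eye(J.shape[1])  # small ridge keeps det well defined
    for _ in range(n_pick):
        best, best_det = None, -np.inf
        for i in range(len(J)):
            if i in chosen:
                continue
            d = np.linalg.det(M + np.outer(J[i], J[i]) / noise_var)
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
        M += np.outer(J[best], J[best]) / noise_var
    return chosen, M

sensors, M = greedy_sed(J, 6)
```

In the sequential setting, the parameter update from each newly acquired survey would re-linearize `J` before the next placement decision.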


2015 ◽  
Vol 8 (3) ◽  
pp. 791-804 ◽  
Author(s):  
J. Reimer ◽  
M. Schuerch ◽  
T. Slawig

Abstract. The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, especially because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental designs, is described together with a lesser-known approach which takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models of different complexity for sediment concentration in seawater and sediment accretion on salt marshes served as an application example. The advantages and disadvantages of these approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements compared to unoptimized experimental designs. The chosen optimization approach played a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.
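The weighted least squares estimator at the heart of this approach, together with its asymptotic covariance, can be written down directly for a linear model. The design matrix, true parameters, and heteroscedastic noise variances below are assumptions for illustration (the paper's sediment models are nonlinear).

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical linear model y = X beta + noise with known, varying
# measurement variances; WLS down-weights the noisier observations
X = rng.normal(size=(50, 2))
beta_true = np.array([1.5, -0.7])
sigma2 = rng.uniform(0.05, 1.0, size=50)
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2))

# WLS estimate: solve (X' W X) beta = X' W y with W = diag(1/sigma2)
W = np.diag(1.0 / sigma2)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# asymptotic covariance of the estimator; design optimization picks
# measurement locations/times so that this matrix is "small"
cov_wls = np.linalg.inv(X.T @ W @ X)
```

A local optimal design then chooses which rows of `X` to measure so as to minimize a scalar functional of `cov_wls` (e.g. its determinant or trace).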


Processes ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 27 ◽  
Author(s):  
René Schenkendorf ◽  
Xiangzhong Xie ◽  
Moritz Rehbein ◽  
Stephan Scholl ◽  
Ulrike Krewer

In the field of chemical engineering, mathematical models have proven to be an indispensable tool for process analysis, process design, and condition monitoring. To gain the most benefit from model-based approaches, the implemented mathematical models have to be based on sound principles, and they need to be calibrated to the process under study with suitable model parameter estimates. Often, however, the model parameters identified from experimental data carry severe uncertainties, leading to incorrect or biased inferences. This applies in particular in the field of pharmaceutical manufacturing, where the measurement data available when analyzing novel active pharmaceutical ingredients are usually limited in quantity and quality. Optimally designed experiments, in turn, aim to increase the quality of the gathered data in the most efficient way. Any improvement in data quality results in more precise parameter estimates and more reliable model candidates. The applied methods for parameter sensitivity analysis and the design criteria are crucial for the effectiveness of the optimal experimental design. In this work, different design measures based on global parameter sensitivities are critically compared with state-of-the-art concepts that follow simplifying linearization principles. The efficient implementation of the proposed sensitivity measures is explicitly addressed to make them applicable to complex chemical engineering problems of practical relevance. As a case study, the homogeneous synthesis of 3,4-dihydro-1H-1-benzazepine-2,5-dione, a scaffold for the preparation of various protein kinase inhibitors, is analyzed, followed by a more complex model of biochemical reactions. In both studies, the model-based optimal experimental design benefits from global parameter sensitivities combined with proper design measures.
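A crude variance-based (Sobol-type) first-order sensitivity estimate, of the kind such global design measures build on, can be sketched as follows. The two-parameter toy response and the quantile-binning estimator are illustrative assumptions, not the paper's reaction models or its specific estimators.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(k1, k2):
    # hypothetical nonlinear response of a reaction model (assumption)
    return np.exp(-k1) + k1 * k2 ** 2

# Monte Carlo samples over the (assumed uniform) parameter ranges
N = 2000
k1 = rng.uniform(0.1, 2.0, N)
k2 = rng.uniform(0.1, 2.0, N)
y = model(k1, k2)
var_y = y.var()

def first_order(xs):
    # crude estimate of S_i = Var_x[E[y | x]] / Var[y] by conditioning
    # on 20 quantile bins of the parameter samples
    bins = np.quantile(xs, np.linspace(0.0, 1.0, 21))
    idx = np.clip(np.digitize(xs, bins) - 1, 0, 19)
    cond_means = np.array([y[idx == b].mean() for b in range(20)])
    return cond_means.var() / var_y

S1 = first_order(k1)
S2 = first_order(k2)
```

Unlike a local derivative at a nominal point, these indices average the parameter influence over its whole range, which is what the global design measures exploit.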


Author(s):  
Panagiotis Tsilifis ◽  
Ilias Bilionis ◽  
Ioannis Katsounaros ◽  
Nicholas Zabaras

The major drawback of the Bayesian approach to model calibration is the computational burden involved in describing the posterior distribution of the unknown model parameters arising from the fact that typical Markov chain Monte Carlo (MCMC) samplers require thousands of forward model evaluations. In this work, we develop a variational Bayesian approach to model calibration which uses an information theoretic criterion to recast the posterior problem as an optimization problem. Specifically, we parameterize the posterior using the family of Gaussian mixtures and seek to minimize the information loss incurred by replacing the true posterior with an approximate one. Our approach is of particular importance in underdetermined problems with expensive forward models in which both the classical approach of minimizing a potentially regularized misfit function and MCMC are not viable options. We test our methodology on two surrogate-free examples and show that it dramatically outperforms MCMC methods.
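The optimization view of calibration can be sketched for the simplest member of the variational family, a single Gaussian (the paper uses Gaussian mixtures). Fixing the standard-normal draws turns the Monte Carlo ELBO into a deterministic objective via the reparameterization trick; the one-dimensional log-posterior below is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def log_post(x):
    # hypothetical unnormalized log-posterior (a non-Gaussian target)
    return -0.5 * (x - 1.0) ** 2 - 0.1 * x ** 4

# fixed draws: reparameterize x = mu + s * eps, eps ~ N(0, 1)
eps = rng.normal(size=500)

def neg_elbo(theta):
    mu, log_s = theta
    s = np.exp(log_s)
    x = mu + s * eps
    # ELBO = E_q[log p(x)] + entropy of q (constant term dropped);
    # minimizing its negative minimizes KL(q || posterior)
    return -(log_post(x).mean() + log_s)

res = minimize(neg_elbo, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
mu_opt, s_opt = res.x[0], np.exp(res.x[1])
```

Each objective evaluation here costs 500 forward evaluations; in the underdetermined, expensive-model regime the paper targets, the same recasting keeps the number of forward solves far below what an MCMC chain would need.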


1985 ◽  
Vol 248 (3) ◽  
pp. R378-R386 ◽  
Author(s):  
M. H. Nathanson ◽  
G. M. Saidel

Optimal experimental design is used to predict the experimental conditions that will allow the "best" estimates of model parameters. A variety of criteria must be considered before an optimal design is chosen. Maximizing the determinant of the information matrix (D optimality), which tends to produce the most precise simultaneous estimates of all parameters, is commonly considered as the primary criterion. To complement this criterion, we present another whose effect is to reduce the interaction among the parameter estimates so that changes in any one parameter can be more distinct. This new criterion consists of maximizing the determinant of an appropriately scaled information matrix (M optimality). These criteria are applied jointly in a multiple-objective function. To illustrate the use of these concepts, we develop an optimal experimental design of blood sampling schedules using a detailed ferrokinetic model.
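The two criteria can be computed side by side for any candidate design's information matrix. The numeric matrix below is an illustrative assumption, not one derived from the ferrokinetic model.

```python
import numpy as np

# hypothetical Fisher information matrix for a 3-parameter model,
# as produced by some candidate blood-sampling schedule (assumption)
F = np.array([[4.0, 1.2, 0.5],
              [1.2, 3.0, 0.8],
              [0.5, 0.8, 2.0]])

# D-optimality: determinant of the information matrix, favoring
# precise simultaneous estimates of all parameters
d_crit = np.linalg.det(F)

# M-optimality: determinant of the information matrix scaled to unit
# diagonal; it equals 1 exactly when the parameter estimates do not
# interact, so maximizing it reduces estimate interaction
s = np.sqrt(np.diag(F))
M = F / np.outer(s, s)
m_crit = np.linalg.det(M)
```

A multiple-objective design as described above would then combine `d_crit` and `m_crit` (e.g. in a weighted product or sum) when ranking candidate schedules.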


2021 ◽  
Vol 2021 (12) ◽  
pp. 124001
Author(s):  
Dominik Linzner ◽  
Heinz Koeppl

Abstract We consider the problem of learning structures and parameters of continuous-time Bayesian networks (CTBNs) from time-course data under minimal experimental resources. In practice, the cost of generating experimental data poses a bottleneck, especially in the natural and social sciences. A popular approach to overcome this is Bayesian optimal experimental design (BOED). However, BOED becomes infeasible in high-dimensional settings, as it involves integration over all possible experimental outcomes. We propose a novel criterion for experimental design based on a variational approximation of the expected information gain. We show that for CTBNs, a semi-analytical expression for this criterion can be calculated for structure and parameter learning. By doing so, we can replace sampling over experimental outcomes with solving the CTBN master equation, for which scalable approximations exist. This alleviates the computational burden of integrating over possible experimental outcomes in high dimensions. We employ this framework to recommend interventional sequences. In this context, we extend the CTBN model to conditional CTBNs in order to incorporate interventions. We demonstrate the performance of our criterion on synthetic and real-world data.

