Data-Driven Uncertainty Quantification for Cardiac Electrophysiological Models: Impact of Physiological Variability on Action Potential and Spiral Wave Dynamics

2020 ◽  
Vol 11 ◽  
Author(s):  
Pras Pathmanathan ◽  
Suran K. Galappaththige ◽  
Jonathan M. Cordeiro ◽  
Abouzar Kaboudian ◽  
Flavio H. Fenton ◽  
...  

Computational modeling of cardiac electrophysiology (EP) has recently transitioned from a scientific research tool to clinical applications. To ensure the reliability of clinical or regulatory decisions made using cardiac EP models, it is vital to evaluate the uncertainty in model predictions. Model predictions are uncertain because there is typically substantial uncertainty in model input parameters, due to measurement error or natural variability. While there has been much recent uncertainty quantification (UQ) research for cardiac EP models, all previous work has been limited in at least one of two ways: (i) considering uncertainty in only a subset of the full set of parameters; and/or (ii) assigning arbitrary variation to parameters (e.g., ±10% or ±50% around the mean value) rather than basing the parameter uncertainty on experimental data. In our recent work we overcame the first limitation by performing UQ and sensitivity analysis using a novel canine action potential model, allowing all parameters to be uncertain, but with arbitrary variation. Here, we address the second limitation by extending our previous work to use data-driven estimates of parameter uncertainty. Overall, we estimated uncertainty due to population variability in all parameters of five currents active during repolarization: the inward rectifier potassium, transient outward potassium, L-type calcium, and rapidly and slowly activating delayed rectifier potassium currents; 25 parameters in total (all model parameters except fast sodium current parameters). A variety of methods was used to estimate the variability in these parameters. We then propagated the uncertainties through the model to determine their impact on predictions of action potential shape, action potential duration (APD) prolongation due to drug block, and spiral wave dynamics. Parameter uncertainty had a significant effect on model predictions, especially for L-type calcium current parameters. 
Correlation between physiological parameters was found to play a role in the physiological realism of action potentials. Surprisingly, even model outputs that were relative differences, specifically drug-induced APD prolongation, were heavily impacted by the underlying uncertainty. This is the first data-driven end-to-end UQ analysis in cardiac EP accounting for uncertainty in the vast majority of parameters, including the first in tissue, and it demonstrates how future UQ could be used to ensure model-based decisions are robust to all underlying parameter uncertainties.
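The forward propagation step described above can be sketched with a plain Monte Carlo loop. The surrogate APD formula, the current names, and the lognormal spreads below are illustrative stand-ins only, not the canine action potential model or the data-driven distributions of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-driven uncertainty: lognormal scaling factors for five
# repolarizing current conductances (names and spreads are illustrative).
currents = {"IK1": 0.15, "Ito": 0.25, "ICaL": 0.30, "IKr": 0.20, "IKs": 0.20}

def apd_surrogate(scales):
    """Toy surrogate for action potential duration (ms): APD lengthens with
    the L-type calcium current and shortens as repolarizing currents grow."""
    base = 250.0
    return base * scales["ICaL"] ** 0.4 / (scales["IK1"] ** 0.1
                                           * scales["Ito"] ** 0.1
                                           * scales["IKr"] ** 0.2
                                           * scales["IKs"] ** 0.2)

# Forward Monte Carlo propagation of parameter uncertainty to the output.
n = 5000
samples = {k: rng.lognormal(mean=0.0, sigma=s, size=n) for k, s in currents.items()}
apd = apd_surrogate(samples)
print(f"APD: {apd.mean():.1f} ms +/- {apd.std():.1f} ms")
```

The spread of the resulting APD samples is the model's predictive uncertainty; per-parameter sensitivity indices would be computed from the same samples.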

Author(s):  
Georg A. Mensah ◽  
Luca Magri ◽  
Jonas P. Moeck

Thermoacoustic instabilities are a major threat for modern gas turbines. Frequency-domain-based stability methods, such as network models and Helmholtz solvers, are common design tools because they are fast compared to compressible flow computations. They result in an eigenvalue problem, which is nonlinear with respect to the eigenvalue. Thus, the influence of the relevant parameters on mode stability is only given implicitly. Small changes in some model parameters may have a great impact on stability. The assessment of how parameter uncertainties propagate to system stability is therefore crucial for safe gas turbine operation. This question is addressed by uncertainty quantification. A common strategy for uncertainty quantification in thermoacoustics is risk factor analysis. One general challenge regarding uncertainty quantification is the sheer number of uncertain parameter combinations to be quantified. For instance, uncertain parameters in an annular combustor might be the equivalence ratio, convection times, geometrical parameters, boundary impedances, flame response model parameters, etc. A new and fast way to obtain algebraic parameter models, in order to tackle the implicit nature of the problem, is to use adjoint perturbation theory. This paper aims to further utilize adjoint methods for the quantification of uncertainties. This analytical method avoids the usual random Monte Carlo (MC) simulations, making it particularly attractive for industrial purposes. Using network models and the open-source Helmholtz solver PyHoltz, it is also discussed how to apply the method with standard modeling techniques. The theory is exemplified based on a simple ducted flame and a combustor of the EM2C laboratory for which experimental data are available.
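The core adjoint idea can be illustrated on a toy linear eigenproblem. For A(α)x = λx with left eigenvector y, first-order perturbation theory gives dλ/dα = yᴴ(dA/dα)x / (yᴴx), so the eigenvalue's parameter sensitivity follows from one eigensolve instead of one per parameter. The 2×2 matrix below is purely illustrative, not a Helmholtz discretization; the thermoacoustic problem is nonlinear in the eigenvalue, but the adjoint formula has the same structure:

```python
import numpy as np

def A(alpha):
    # Toy parameterized operator (stand-in for a thermoacoustic eigenproblem).
    return np.array([[0.0, 1.0], [-4.0 - alpha, -0.3 * alpha]])

alpha0 = 1.0
lam, X = np.linalg.eig(A(alpha0))
k = np.argmax(lam.imag)                        # pick one oscillatory mode
x = X[:, k]

# Left eigenvector: eigenvector of A^H whose eigenvalue is conj(lambda_k).
lam_l, Y = np.linalg.eig(A(alpha0).conj().T)
y = Y[:, np.argmin(np.abs(lam_l.conj() - lam[k]))]

dA = np.array([[0.0, 0.0], [-1.0, -0.3]])      # exact dA/d(alpha)
sens_adj = (y.conj() @ dA @ x) / (y.conj() @ x)

# Finite-difference check (requires re-solving the eigenproblem).
eps = 1e-6
lam_p = np.linalg.eig(A(alpha0 + eps))[0]
lam_p_k = lam_p[np.argmin(np.abs(lam_p - lam[k]))]
sens_fd = (lam_p_k - lam[k]) / eps
print(sens_adj, sens_fd)
```

With many uncertain parameters, only the cheap yᴴ(dA/dα)x products change per parameter, which is what makes the adjoint route attractive compared to repeated eigensolves or Monte Carlo.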


2002 ◽  
Vol 282 (6) ◽  
pp. H2296-H2308 ◽  
Author(s):  
O. Bernus ◽  
R. Wilders ◽  
C. W. Zemlin ◽  
H. Verschelde ◽  
A. V. Panfilov

Recent experimental and theoretical results have stressed the importance of modeling studies of reentrant arrhythmias in cardiac tissue and at the whole-heart level. We introduce a six-variable model obtained by a reformulation of the Priebe-Beuckelmann model of a single human ventricular cell. The reformulated model is 4.9 times faster in numerical computations and more stable than the original model. It retains the action potential shape at various frequencies, the restitution of action potential duration, and the restitution of conduction velocity. We were able to reproduce the main properties of epicardial, endocardial, and M cells by modifying selected ionic currents. We performed a simulation study of spiral wave behavior in a two-dimensional sheet of human ventricular tissue and showed that spiral waves have a frequency of 3.3 Hz and a linear core of ∼50-mm diameter that rotates with an average angular velocity of 0.62 rad/s. Simulation results agreed with experimental data. In conclusion, the proposed model is suitable for efficient and accurate studies of reentrant phenomena in human ventricular tissue.


2020 ◽  
Author(s):  
Jonas Sukys ◽  
Marco Bacci

SPUX (Scalable Package for Uncertainty Quantification in "X") is a modular framework for Bayesian inference and uncertainty quantification. The SPUX framework aims at harnessing high-performance scientific computing to tackle complex aquatic dynamical systems rich in intrinsic uncertainties, such as ecological ecosystems, hydrological catchments, lake dynamics, subsurface flows, urban floods, etc. The challenging task of quantifying input, output and/or parameter uncertainties in such stochastic models is tackled using Bayesian inference techniques, where numerical sampling and filtering algorithms assimilate prior expert knowledge and available experimental data. The SPUX framework greatly simplifies uncertainty quantification for realistic computationally costly models and provides an accessible, modular, portable, scalable, interpretable and reproducible scientific workflow. To achieve this, SPUX can be coupled to any serial or parallel model written in any programming language (e.g. Python, R, C/C++, Fortran, Java), can be installed either on a laptop or on a parallel cluster, and has built-in support for automatic reports, including algorithmic and computational performance metrics. I will present key SPUX concepts using a simple random walk example, and showcase recent realistic applications for catchment and lake models. In particular, uncertainties in model parameters, meteorological inputs, and data observation processes are inferred by assimilating available in-situ and remotely sensed datasets.
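The random-walk demonstration mentioned above can be sketched generically as Bayesian inference of the walk's step size from one observed trajectory. This does not use the actual SPUX API or its particle-filtering machinery; it is a minimal grid-based posterior under an assumed uniform prior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "data": a Gaussian random walk with unknown step size sigma.
true_sigma = 0.5
walk = np.cumsum(rng.normal(0.0, true_sigma, size=400))
increments = np.diff(walk, prepend=0.0)        # i.i.d. N(0, sigma^2) steps

# Uniform prior on sigma; posterior evaluated on a grid.
sigmas = np.linspace(0.05, 2.0, 400)
loglik = np.array([np.sum(-0.5 * (increments / s) ** 2 - np.log(s))
                   for s in sigmas])
post = np.exp(loglik - loglik.max())
post /= post.sum()                             # normalized posterior weights
sigma_map = sigmas[np.argmax(post)]
print(f"MAP estimate of sigma: {sigma_map:.3f} (true {true_sigma})")
```

In SPUX-style workflows the likelihood is not available in closed form and is instead approximated by sampling and filtering the stochastic model, but the inferential target, a posterior over parameters, is the same.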


Author(s):  
Samuel R Kuo ◽  
Natalia A Trayanova

Atrial fibrillation (AF) is believed to be perpetuated by recirculating spiral waves. Atrial structures are often characterized by action potentials of varying morphologies; however, the role of the structure-dependent atrial electrophysiological heterogeneity in spiral wave behaviour is not well understood. The purpose of this study is to determine the effect of the action potential morphology heterogeneity associated with the major atrial structures on spiral wave maintenance. The present study also focuses on how this effect is further modulated by the presence of the inherent periodicity in atrial structure. The goals of the study are achieved through the simulation of electrical behaviour in a two-dimensional atrial tissue model that incorporates the representation of action potentials in various structurally distinct regions in the right atrium. Periodic boundary conditions are then imposed to form a cylinder (quasi three-dimensional), thus allowing exploration of the additional effect of structure periodicity on spiral wave behaviour. Transmembrane potential maps and phase singularity traces are analysed to determine effects on spiral wave behaviour. Results demonstrate that the prolonged refractoriness of the crista terminalis (CT) affects the pattern of spiral wave reentry, while the variation in action potential morphology of the other structures does not. The CT anchors the spiral waves, preventing them from drifting away. Spiral wave dynamics is altered when the ends of the sheet are spliced together to form a cylinder. The main effect of the continuous surface is the generation of secondary spiral waves, which influence the primary rotors. The interaction of the primary and secondary spiral waves decreased as cylinder diameter increased.


2021 ◽  
Author(s):  
Bruno V Rego ◽  
Dar Weiss ◽  
Matthew R Bersi ◽  
Jay D Humphrey

Quantitative estimation of local mechanical properties remains critically important in the ongoing effort to elucidate how blood vessels establish, maintain, or lose mechanical homeostasis. Recent advances based on panoramic digital image correlation (pDIC) have made high-fidelity 3D reconstructions of small-animal (e.g., murine) vessels possible when imaged in a variety of quasi-statically loaded configurations. While we have previously developed and validated inverse modeling approaches to translate pDIC-measured surface deformations into biomechanical metrics of interest, our workflow did not heretofore include a methodology to quantify uncertainties associated with local point estimates of mechanical properties. This limitation has compromised our ability to infer biomechanical properties on a subject-specific basis, such as whether stiffness differs significantly between multiple material locations on the same vessel or whether stiffness differs significantly between multiple vessels at a corresponding material location. In the present study, we have integrated a novel uncertainty quantification and propagation pipeline within our inverse modeling approach, relying on empirical and analytic Bayesian techniques. To demonstrate the approach, we present illustrative results for the ascending thoracic aorta from three mouse models, quantifying uncertainties in constitutive model parameters as well as circumferential and axial tangent stiffness. Our extended workflow not only allows parameter uncertainties to be systematically reported, but also facilitates both subject-specific and group-level statistical analyses of the mechanics of the vessel wall.
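The benefit of reporting parameter uncertainties alongside point estimates, so that stiffness can be compared between locations or vessels, can be sketched with an analytic normal-normal Bayesian update. The stiffness values, prior, and noise level below are illustrative only, not derived from pDIC data or the paper's constitutive models:

```python
import numpy as np

def posterior(prior_mu, prior_var, data, noise_var):
    """Conjugate normal-normal update: posterior mean and variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + np.sum(data) / noise_var)
    return post_mu, post_var

rng = np.random.default_rng(7)
meas_a = rng.normal(120.0, 10.0, 8)     # stiffness measurements, location A (kPa)
meas_b = rng.normal(150.0, 10.0, 8)     # stiffness measurements, location B (kPa)
mu_a, var_a = posterior(130.0, 400.0, meas_a, 100.0)
mu_b, var_b = posterior(130.0, 400.0, meas_b, 100.0)

# Difference of independent normal posteriors: is B significantly stiffer?
z = (mu_b - mu_a) / np.sqrt(var_a + var_b)
print(f"A: {mu_a:.1f} kPa, B: {mu_b:.1f} kPa, z = {z:.2f}")
```

With the posterior variances in hand, a between-location comparison becomes a statistical test rather than a comparison of two bare numbers, which is the capability the extended workflow adds.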


Author(s):  
Jun Guo ◽  
Daniel Segalman

Abstract In the ordinary process of estimating uncertainty in model predictions, one usually looks to some set of calibration experiments from which the model can be parameterized, and then the resulting discrete set of model parameters is used to approximate the joint probability distribution of parameter vectors. That parameter uncertainty is propagated through the model to obtain predictive uncertainty. A key observation here is that usually the modeler will attempt to find a unique "best" vector of parameters to match each calibration experiment, and these "best" parameter vectors are used to estimate parameter uncertainty. In the work presented here, it is shown how for complex models (having more than a few parameters) it can happen that each experiment can be fit equally well by a multitude of parameter vectors. It is also shown that when these large numbers of candidate parameter vectors are compiled, the resulting model predictions may manifest substantially more variance than would be the case without consideration of the non-uniqueness issue. The contribution of non-uniqueness to prediction uncertainty is illustrated on two very different sorts of model. In the first case, Johnson-Cook models for a titanium alloy are parameterized to match calibration experiments on three different alloy samples at different temperatures and strain rates. The resulting ensemble of parameter vectors is used to predict peak stress in a different experiment. In the second case, an epidemiological model is calibrated to history data and the parameter vectors are used to calculate a quantity of interest and the uncertainty of that quantity.
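The non-uniqueness effect can be demonstrated on a deliberately simple model (not the Johnson-Cook or epidemiological models of the paper). For f(t; a, b) = a(1 − e^(−bt)), early-time data constrain only the product a·b, since f ≈ abt for small t; every parameter vector on that ridge fits the calibration equally well, yet the extrapolated predictions disagree widely:

```python
import numpy as np

# Non-uniqueness sketch: f(t; a, b) = a * (1 - exp(-b t)).
# Early-time calibration data constrain only a*b, so many (a, b) pairs
# fit equally well while long-time predictions differ. All numbers are
# illustrative.

def f(t, a, b):
    return a * (1.0 - np.exp(-b * t))

t_cal = np.linspace(0.0, 0.02, 10)        # early-time calibration window
data = f(t_cal, 2.0, 3.0)                 # "experiment" with a*b = 6

# Candidate parameter vectors sharing the identified product a*b = 6.
bs = np.linspace(1.5, 6.0, 50)
cands = [(6.0 / b, b) for b in bs]
misfit = [np.max(np.abs(f(t_cal, a, b) - data)) for a, b in cands]
pred = [f(5.0, a, b) for a, b in cands]   # extrapolated prediction at t = 5

print(f"worst calibration misfit: {max(misfit):.4f}")
print(f"prediction range: {min(pred):.2f} .. {max(pred):.2f}")
```

Keeping only the single "best" vector would hide the factor-of-several spread in the extrapolated prediction; compiling the whole family of near-equal fits exposes it.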


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
M. Khaki ◽  
H.-J. Hendricks Franssen ◽  
S. C. Han

Abstract Satellite remote sensing offers valuable tools to study Earth and hydrological processes and improve land surface models. This is essential to improve the quality of model predictions, which are affected by various factors such as erroneous input data, the uncertainty of model forcings, and parameter uncertainties. Abundant datasets from multi-mission satellite remote sensing during recent years have provided an opportunity to improve not only the model estimates but also model parameters through a parameter estimation process. This study utilises multiple datasets from satellite remote sensing including soil moisture from Soil Moisture and Ocean Salinity Mission and Advanced Microwave Scanning Radiometer Earth Observing System, terrestrial water storage from the Gravity Recovery And Climate Experiment, and leaf area index from Advanced Very-High-Resolution Radiometer to estimate model parameters. This is done using the recently proposed assimilation method, unsupervised weak constrained ensemble Kalman filter (UWCEnKF). UWCEnKF applies a dual scheme to separately update the state and parameters using two interactive EnKF filters followed by a water balance constraint enforcement. The performance of multivariate data assimilation is evaluated against various independent data over different time periods over two different basins including the Murray–Darling and Mississippi basins. Results indicate that simultaneous assimilation of multiple satellite products combined with parameter estimation strongly improves model predictions compared with single satellite products and/or state estimation alone. 
This improvement is achieved not only during the parameter estimation period (∼32% groundwater RMSE reduction and soil moisture correlation increase from ∼0.66 to ∼0.85) but also during the forecast period (∼14% groundwater RMSE reduction and soil moisture correlation increase from ∼0.69 to ∼0.78) due to the effective impacts of the approach on both state and parameters.
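The state-augmentation idea behind ensemble Kalman parameter estimation can be sketched in a few lines: each ensemble member carries a (state, parameter) pair, and observing the state also updates the parameter through the ensemble cross-covariance. This is a generic stochastic EnKF step on a toy scalar model, not the dual UWCEnKF scheme or the water-balance constraint of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

n_ens, obs, r = 200, 1.2, 0.05 ** 2          # ensemble size, observation, obs variance
state = rng.normal(1.0, 0.2, n_ens)          # prior state ensemble
param = rng.normal(0.5, 0.1, n_ens)          # prior parameter ensemble

# Forecast with a simple linear model: x <- param * x + 1 (illustrative).
state = param * state + 1.0

# Kalman gains from ensemble (co)variances against the observed quantity.
c_xx = np.var(state, ddof=1)
k_state = c_xx / (c_xx + r)
k_param = np.cov(param, state)[0, 1] / (c_xx + r)

# Perturbed-observation analysis step updates state AND parameter.
innov = obs + rng.normal(0.0, np.sqrt(r), n_ens) - state
state += k_state * innov
param += k_param * innov
print(f"posterior state mean {state.mean():.3f}, param mean {param.mean():.3f}")
```

Because the forecast state and the parameter are correlated across the ensemble, the observation pulls both toward values consistent with the data, which is the mechanism by which assimilation improves parameters as well as states.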


2018 ◽  
Author(s):  
Aynom T. Tweldebrahn ◽  
John F. Burkhart ◽  
Thomas V. Schuler

Abstract. Parameter uncertainty estimation is one of the major challenges in hydrological modelling. Here we present a parameter uncertainty analysis of a recently released distributed conceptual hydrological model applied in the Nea catchment, Norway. Two variants of the generalized likelihood uncertainty estimation (GLUE) methodology, one based on the residuals and the other on the limits of acceptability, were employed. Streamflow and remote sensing snow cover data were used in conditioning model parameters and in model validation. When using the GLUE limit of acceptability (GLUE LOA) approach, a streamflow observation error of 25% was assumed. Neither the original limits, nor relaxing the limits up to a physically meaningful value, yielded a behavioural model capable of predicting streamflow within the limits in 100% of the observations. As an alternative to relaxing the limits, the requirement on the percentage of model predictions falling within the original limits was relaxed. An empirical approach was introduced to define the degree of relaxation. The results show that snow and water balance related parameters induce relatively higher streamflow uncertainty than catchment response parameters. Comparable results were obtained from behavioural models selected using the two GLUE methodologies.
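The limits-of-acceptability selection with a relaxed percentage requirement can be sketched as follows: sample parameters from the prior, simulate, and keep as behavioural those sets whose predictions fall inside ±25% observation-error limits at a relaxed fraction of the time steps. The linear-reservoir model, 90% threshold, and all numbers are illustrative, not the distributed model or calibration of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(k, rain):
    """Toy linear reservoir: discharge is fraction k of current storage."""
    s, q = 0.0, []
    for r in rain:
        out = k * (s + r)
        s = (1.0 - k) * (s + r)
        q.append(out)
    return np.array(q)

rain = rng.gamma(2.0, 1.0, 100)
obs = simulate(0.3, rain) * rng.normal(1.0, 0.05, 100)   # noisy "observations"
lo, hi = 0.75 * obs, 1.25 * obs                           # 25% error limits

ks = rng.uniform(0.05, 0.9, 2000)                         # prior samples
qs = [simulate(k, rain) for k in ks]
inside = np.array([np.mean((q >= lo) & (q <= hi)) for q in qs])
behavioural = ks[inside >= 0.9]                           # relaxed: 90% of steps
print(f"{behavioural.size} behavioural sets, k in "
      f"[{behavioural.min():.2f}, {behavioural.max():.2f}]")
```

Lowering the 90% requirement widens the behavioural parameter band, which is the trade-off the empirical relaxation approach in the paper is designed to control.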


2018 ◽  
Author(s):  
Simen Tennøe ◽  
Geir Halnes ◽  
Gaute T. Einevoll

Abstract Computational models in neuroscience typically contain many parameters that are poorly constrained by experimental data. Uncertainty quantification and sensitivity analysis provide rigorous procedures to quantify how the model output depends on this parameter uncertainty. Unfortunately, the application of such methods is not yet standard within the field of neuroscience. Here we present Uncertainpy, an open-source Python toolbox, tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models. Uncertainpy aims to make it easy and quick to get started with uncertainty analysis, without any need for detailed prior knowledge. The toolbox allows uncertainty quantification and sensitivity analysis to be performed on already existing models without needing to modify the model equations or model implementation. Uncertainpy bases its analysis on polynomial chaos expansions, which are more efficient than the more standard Monte Carlo-based approaches. Uncertainpy is tailored for neuroscience applications by its built-in capability for calculating characteristic features in the model output. The toolbox does not merely perform a point-to-point comparison of the "raw" model output (e.g. membrane voltage traces), but can also calculate the uncertainty and sensitivity of salient model response features such as spike timing, action potential width, mean interspike interval, and other features relevant for various neural and neural network models. Uncertainpy comes with several common models and features built in, and including custom models and new features is easy. The aim of the current paper is to present Uncertainpy for the neuroscience community in a user-oriented manner. 
To demonstrate its broad applicability, we perform an uncertainty quantification and sensitivity analysis on three case studies relevant for neuroscience: the original Hodgkin-Huxley point-neuron model for action potential generation, a multi-compartmental model of a thalamic interneuron implemented in the NEURON simulator, and a sparsely connected recurrent network model implemented in the NEST simulator.
SIGNIFICANCE STATEMENT A major challenge in computational neuroscience is to specify the often large number of parameters that define the neuron and neural network models. Many of these parameters have an inherent variability, and some may even be actively regulated and change with time. It is important to know how the uncertainty in model parameters affects the model predictions. To address this need we here present Uncertainpy, an open-source Python toolbox tailored to perform uncertainty quantification and sensitivity analysis of neuroscience models.
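The feature-based idea, quantifying uncertainty in a salient output feature rather than comparing raw voltage traces point by point, can be sketched without the toolbox itself. Uncertainpy uses polynomial chaos expansions; the sketch below uses plain Monte Carlo on a toy leaky integrate-and-fire neuron, with spike count as the feature, and all parameter distributions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def spike_count(tau, i_ext, t_end=200.0, dt=0.1, v_th=1.0):
    """Forward-Euler leaky integrate-and-fire: count threshold crossings."""
    v, n = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (-v / tau + i_ext)
        if v >= v_th:
            v, n = 0.0, n + 1   # reset after a spike
    return n

# Uncertain parameters: membrane time constant and input current.
taus = rng.uniform(8.0, 12.0, 300)
currents = rng.uniform(0.12, 0.2, 300)
counts = np.array([spike_count(t, i) for t, i in zip(taus, currents)])
print(f"spike count: mean {counts.mean():.1f}, std {counts.std():.1f}")
```

The mean and spread of the feature summarize how parameter uncertainty maps onto a quantity neuroscientists actually compare across cells, which a point-to-point trace metric would obscure (a small timing shift makes traces look wildly different while the spike count is unchanged).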


2021 ◽  
Vol 247 ◽  
pp. 20005
Author(s):  
Dan G. Cacuci

This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the "curse of dimensionality" for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd-, and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM ("Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling"), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective user-defined "cost functional quantifying the discrepancies between measurements and computations." By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase-space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes the current data adjustment/assimilation methodologies. 
Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality which would affect the inversion of Hessian and higher-order matrices in the parameter space.
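The role sensitivities play in moment propagation can be illustrated at first order: with response sensitivities S_i = ∂R/∂α_i and parameter covariance C, the leading term of the response variance is var(R) ≈ S C Sᵀ, which the 2nd- and 3rd-order sensitivities then refine with curvature and skewness contributions. The toy response function below is illustrative, not a reactor-physics model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy response R(alpha) = a0^2 + 3*a1 + sin(a2), nominal parameters alpha0.
alpha0 = np.array([1.0, 2.0, 0.5])
C = np.diag([0.01, 0.04, 0.0025])            # parameter covariance matrix

# Analytic first-order sensitivities S_i = dR/d(alpha_i) at alpha0.
S = np.array([2.0 * alpha0[0], 3.0, np.cos(alpha0[2])])
var_first = S @ C @ S                        # first-order variance S C S^T

# Brute-force Monte Carlo reference for comparison.
samples = rng.multivariate_normal(alpha0, C, 200_000)
vals = samples[:, 0] ** 2 + 3.0 * samples[:, 1] + np.sin(samples[:, 2])
var_mc = vals.var()
print(f"first-order var {var_first:.4f}, Monte Carlo var {var_mc:.4f}")
```

For this mildly nonlinear response the first-order result already matches Monte Carlo closely; the adjoint methodologies summarized above compute the exact higher-order sensitivities that close the remaining gap, and supply the skewness of the response as well, without Monte Carlo sampling.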

