Adaptive Reconstruction of Imperfectly Observed Monotone Functions, with Applications to Uncertainty Quantification

Algorithms ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 196
Author(s):  
Luc Bonnet ◽  
Jean-Luc Akian ◽  
Éric Savin ◽  
T. Sullivan

Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
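As background for the classical baseline the abstract contrasts with, isotonic regression is typically solved with the pool-adjacent-violators algorithm (PAVA). The sketch below is a minimal textbook PAVA, not the authors' adaptive reconstruction algorithm:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares fit of a non-decreasing
    sequence to observations y (classical isotonic regression)."""
    y = np.asarray(y, dtype=float)
    # Each block stores (mean, weight); merge blocks while monotonicity is violated.
    means, weights = [], []
    for v in y:
        means.append(v)
        weights.append(1.0)
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            m = (means[-2] * weights[-2] + means[-1] * weights[-1]) / w
            means = means[:-2] + [m]
            weights = weights[:-2] + [w]
    # Expand block means back to a full-length fitted sequence.
    return np.concatenate([np.full(int(w), m) for m, w in zip(means, weights)])
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into their mean, yielding `[1, 2.5, 2.5, 4]`.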

2021 ◽  
Author(s):  
Y. Curtis Wang ◽  
Nirvik Sinha ◽  
Johann Rudi ◽  
James Velasco ◽  
Gideon Idumah ◽  
...  

Experimental data-based parameter search for Hodgkin-Huxley-style (HH) neuron models is a major challenge for neuroscientists and neuroengineers. Current search strategies are often computationally expensive, are slow to converge, have difficulty handling nonlinearities or multimodalities in the objective function, or require good initial parameter guesses. Most importantly, many existing approaches lack quantification of uncertainties in parameter estimates even though such uncertainties are of immense biological significance. We propose a novel method for parameter inference and uncertainty quantification in a Bayesian framework using the Markov chain Monte Carlo (MCMC) approach. This approach incorporates prior knowledge about model parameters (as probability distributions) and aims to map the prior to a posterior distribution of parameters informed by both the model and the data. Furthermore, using the adaptive parallel tempering strategy for MCMC, we tackle the highly nonlinear, noisy, and multimodal loss function, which depends on the HH neuron model. We tested the robustness of our approach using the voltage trace data generated from a 9-parameter HH model using five levels of injected currents (0.0, 0.1, 0.2, 0.3, and 0.4 nA). Each test consisted of running the ground-truth model with its respective current to estimate the model parameters. To simulate the condition for fitting a frequency-current (F-I) curve, we also introduced an aggregate objective that runs MCMC against all five levels simultaneously. We found that MCMC was able to produce many solutions with acceptable loss values (e.g., for 0.0 nA, 889 solutions were within 0.5% of the best solution and 1,595 solutions within 1% of the best solution). Thus, an adaptive parallel tempering MCMC search provides a "landscape" of the possible parameter sets with acceptable loss values in a tractable manner.
Our approach is able to obtain an intelligently sampled global view of the solution distributions within a search range in a single computation. Additionally, the advantage of uncertainty quantification allows for exploration of further solution spaces, which can serve to better inform future experiments.
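The adaptive parallel tempering strategy described above builds on the basic Metropolis accept/reject kernel. As a hedged illustration (a toy one-parameter model with a flat prior, not the authors' 9-parameter HH setup), a minimal random-walk Metropolis sampler looks like this:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler; adaptive parallel
    tempering runs several such chains at different 'temperatures'
    and swaps states between them to escape local modes."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy "model": infer a single conductance-like parameter theta from noisy data.
rng = np.random.default_rng(1)
data = 2.0 + 0.3 * rng.standard_normal(50)                     # ground truth theta = 2.0
log_post = lambda t: -0.5 * np.sum((data - t) ** 2) / 0.3**2   # Gaussian likelihood, flat prior
chain = metropolis(log_post, x0=0.0)
```

Discarding an initial burn-in segment, the chain's histogram approximates the posterior over the parameter, which is exactly the "landscape" of acceptable solutions the abstract refers to.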


2020 ◽  
Vol 499 (4) ◽  
pp. 5641-5652
Author(s):  
Georgios Vernardos ◽  
Grigorios Tsagkatakis ◽  
Yannis Pantazis

ABSTRACT Gravitational lensing is a powerful tool for constraining substructure in the mass distribution of galaxies, be it from the presence of dark matter sub-haloes or due to physical mechanisms affecting the baryons throughout galaxy evolution. Such substructure is hard to model and is either ignored by traditional smooth-modelling approaches, or treated as well-localized massive perturbers. In this work, we propose a deep learning approach to quantify the statistical properties of such perturbations directly from images, where only the extended lensed source features within a mask are considered, without the need for any lens modelling. Our training data consist of mock lensed images assuming perturbing Gaussian Random Fields permeating the smooth overall lens potential, and, for the first time, using images of real galaxies as the lensed source. We employ a novel deep neural network that can handle arbitrary uncertainty intervals associated with the training data set labels as input, provides probability distributions as output, and adopts a composite loss function. The method succeeds not only in accurately estimating the actual parameter values, but also reduces the predicted confidence intervals by 10 per cent in an unsupervised manner, i.e. without having access to the actual ground truth values. Our results are invariant to the inherent degeneracy between mass perturbations in the lens and complex brightness profiles for the source. Hence, we can quantitatively and robustly quantify the smoothness of the mass density of thousands of lenses, including confidence intervals, and provide a consistent ranking for follow-up science.
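The paper's composite loss is its own; as a generic illustration of how a network can "provide probability distributions as output", a common building block is the heteroscedastic Gaussian negative log-likelihood, where the network predicts a mean and a log-variance per label rather than a point estimate (the function below is a hedged sketch of that standard loss, not the authors' loss):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of labels y under predicted Gaussians
    N(mu, exp(log_var)). Minimizing this trains the network to emit
    both an estimate (mu) and a confidence interval (via the variance)."""
    var = np.exp(log_var)
    return 0.5 * np.mean(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
```

Because the variance term is learned, the network can shrink its predicted intervals wherever the data support it, which is the mechanism behind tightening confidence intervals without access to ground truth.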


2011 ◽  
Vol 20 (08) ◽  
pp. 1571-1589 ◽  
Author(s):  
K. H. TSENG ◽  
J. S. H. TSAI ◽  
C. Y. LU

This paper deals with the problem of globally delay-dependent robust stabilization for a Takagi–Sugeno (T–S) fuzzy neural network with time delays and uncertain parameters. The time delays comprise discrete and distributed interval time-varying delays, and the uncertain parameters are norm-bounded. Based on the Lyapunov–Krasovskii functional approach and the linear matrix inequality technique, delay-dependent sufficient conditions are derived for ensuring the exponential stability of the closed-loop fuzzy control system. An important feature of the result is that all the stability conditions depend on the upper and lower bounds of the delays, which is made possible by the proposed techniques for achieving delay dependence. Another feature of the results is that they involve fewer matrix variables. Two illustrative examples demonstrate the effectiveness of the proposed design methods.
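The Lyapunov–Krasovskii conditions above extend the classical delay-free Lyapunov equation with integral terms over the delay interval. As a hedged background sketch of that delay-free ancestor (not the paper's LMI conditions), one can solve A^T P + P A = -Q by Kronecker vectorization and verify that P is positive definite, which certifies exponential stability of x' = Ax:

```python
import numpy as np

def lyapunov_P(A, Q):
    """Solve the continuous Lyapunov equation A^T P + P A = -Q
    via row-major vectorization: vec(A^T P) = (A^T ⊗ I) vec(P),
    vec(P A) = (I ⊗ A^T) vec(P)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    return np.linalg.solve(M, -Q.flatten()).reshape(n, n)
```

For a stable A and positive definite Q, the returned P is positive definite; the T–S fuzzy, delay-dependent conditions in the paper play the same certifying role, but as matrix inequalities parameterized by the delay bounds.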


Author(s):  
Anna Louise D. Latour ◽  
Behrouz Babaki ◽  
Siegfried Nijssen

A number of data mining problems on probabilistic networks can be modeled as Stochastic Constraint Optimization and Satisfaction Problems, i.e., problems that involve objectives or constraints with a stochastic component. Earlier methods for solving these problems used Ordered Binary Decision Diagrams (OBDDs) to represent constraints on probability distributions, which were decomposed into sets of smaller constraints and solved by Constraint Programming (CP) or Mixed Integer Programming (MIP) solvers. For the specific case of monotonic distributions, we propose an alternative method: a new propagator for a global OBDD-based constraint. We show that this propagator is (sub-)linear in the size of the OBDD, and maintains domain consistency. We experimentally evaluate the effectiveness of this global constraint in comparison to existing decomposition-based approaches, and show how this propagator can be used in combination with another data-mining-specific constraint present in CP systems. As test cases we use problems from the data mining literature.
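The key fact a propagator like this exploits is that, given independent variable marginals, the probability that an OBDD-represented constraint holds can be computed in one linear bottom-up pass over the diagram. The sketch below illustrates that pass on a tiny hand-built OBDD for (x1 ∧ x2) ∨ x3; the node layout is an assumption for illustration, not the authors' encoding:

```python
# OBDD nodes as (var, lo_child, hi_child); terminals are True/False.
TRUE, FALSE = True, False
N3 = ('x3', FALSE, TRUE)
N2 = ('x2', N3, TRUE)
N1 = ('x1', N3, N2)     # root: encodes (x1 AND x2) OR x3

def prob(node, p):
    """Probability that the OBDD evaluates to true, given independent
    marginals p[var]: one pass visiting each node once, which is the
    basis for a propagator linear in the size of the OBDD."""
    if node is TRUE:
        return 1.0
    if node is FALSE:
        return 0.0
    var, lo, hi = node
    return (1 - p[var]) * prob(lo, p) + p[var] * prob(hi, p)
```

With all marginals at 0.5, the formula's probability is P(x3) + P(¬x3)·P(x1)·P(x2) = 0.5 + 0.5·0.25 = 0.625, and `prob(N1, ...)` reproduces it.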


Author(s):  
Djamalddine Boumezerane

Abstract In this study, we use possibility distributions as a basis for parameter uncertainty quantification in one-dimensional consolidation problems. A possibility distribution is the one-point coverage function of a random set and is viewed as capturing both partial ignorance and uncertainty. Vagueness and scarcity of the information needed for characterizing the coefficient of consolidation in clay can be handled using possibility distributions. Possibility distributions can be constructed from existing data, or based on transformations of probability distributions. An attempt is made to establish a systematic approach for estimating uncertainty propagation during the consolidation process. The measure of uncertainty is based on Klir's (1995) definition. We make comparisons with results obtained from other approaches (probabilistic…) and discuss the importance of using possibility distributions in this type of problem.
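A common concrete choice when only a plausible range and a most-plausible value are known is the triangular possibility distribution, propagated through the model via its α-cuts. The sketch below is a generic illustration of that construction (the parameter name c_v and the triangular shape are assumptions, not taken from the paper):

```python
import numpy as np

def triangular_possibility(x, low, mode, high):
    """Triangular possibility distribution for an uncertain parameter,
    e.g. a coefficient of consolidation c_v known only through a
    plausible range [low, high] and a most-plausible value `mode`."""
    x = np.asarray(x, dtype=float)
    left = (x - low) / (mode - low)
    right = (high - x) / (high - mode)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def alpha_cut(alpha, low, mode, high):
    """Interval of parameter values with possibility >= alpha; pushing
    these nested intervals through the consolidation model yields
    possibility bounds on settlement or degree of consolidation."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))
```

The α = 0 cut recovers the full support [low, high], while α = 1 collapses to the most-plausible value, so a sweep over α gives the nested family of intervals used for propagation.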


2014 ◽  
Vol 51 (3) ◽  
pp. 769-779
Author(s):  
Fabio Lopes

Suppose that red and blue points occur in ℝ^d according to two simple point processes with finite intensities λ_R and λ_B, respectively. Furthermore, let ν and μ be two probability distributions on the strictly positive integers with means ν̅ and μ̅, respectively. Assign independently a random number of stubs (half-edges) to each red (blue) point with law ν (μ). We are interested in translation-invariant schemes for matching stubs between points of different colors in order to obtain random bipartite graphs in which each point has a prescribed degree distribution with law ν or μ depending on its color. For a large class of point processes, we show that such translation-invariant schemes matching almost surely all stubs are possible if and only if λ_R ν̅ = λ_B μ̅, including the case when ν̅ = μ̅ = ∞ so that both sides are infinite. Furthermore, we study a particular scheme based on the Gale-Shapley stable marriage problem. For this scheme, we give sufficient conditions on ν and μ for the presence and absence of infinite components. These results are two-color versions of those obtained by Deijfen, Holroyd and Häggström.
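The combinatorial core of the scheme named above is the classic Gale-Shapley stable marriage algorithm. The sketch below is the textbook finite version on explicit preference lists (in the point-process setting, preferences are induced by distance and the configuration is infinite, so this is an illustration of the dynamics only):

```python
def gale_shapley(prop_prefs, acc_prefs):
    """Classic Gale-Shapley stable matching: proposers propose down
    their preference lists; acceptors hold their best offer so far
    and trade up when a preferred proposer arrives."""
    rank = {a: {p: r for r, p in enumerate(prefs)} for a, prefs in acc_prefs.items()}
    free = list(prop_prefs)            # proposers not yet matched
    next_ix = {p: 0 for p in prop_prefs}
    match = {}                         # acceptor -> proposer
    while free:
        p = free.pop()
        a = prop_prefs[p][next_ix[p]]  # best acceptor p has not yet tried
        next_ix[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])      # acceptor trades up; old partner is freed
            match[a] = p
        else:
            free.append(p)             # proposal rejected
    return match

reds  = {'r1': ['b1', 'b2'], 'r2': ['b1', 'b2']}
blues = {'b1': ['r2', 'r1'], 'b2': ['r1', 'r2']}
matching = gale_shapley(reds, blues)
```

In this instance the unique stable matching pairs r2 with b1 and r1 with b2: any other pairing leaves r2 and b1 mutually preferring each other.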

