aleatoric uncertainty
Recently Published Documents


TOTAL DOCUMENTS: 26 (five years: 14)
H-INDEX: 4 (five years: 0)

2021
Author(s): Ryan Santoso, Xupeng He, Marwa Alsinan, Hyung Kwak, Hussein Hoteit

Abstract. Automatic fracture recognition from borehole images or outcrops is applicable to the construction of fractured-reservoir models. Deep learning for fracture recognition is subject to uncertainty due to sparse and imbalanced training sets and random initialization. We present a new workflow to optimize a deep learning model under uncertainty using U-Net, considering both the epistemic and aleatoric uncertainty of the model. We propose a U-Net architecture with a dropout layer inserted after every "weighting" layer and vary the dropout probability to investigate its impact on the uncertainty response. We build the training set and assign a uniform distribution to each training parameter, such as the number of epochs, batch size, and learning rate. We then perform uncertainty quantification by running the model multiple times for each realization, capturing the aleatoric response. In this approach, based on Monte Carlo Dropout, the variance map and F1-scores are used to evaluate whether additional augmentations are needed or the process can stop.

This work demonstrates that uncertainty exists within deep learning models trained on sparse and imbalanced training sets and that it leads to unstable predictions; the overall response is captured as aleatoric uncertainty. Our workflow uses the uncertainty response (variance map) as a guide for crafting additional augmentations of the training set. High variance in certain features indicates the need to add new augmented images containing those features, either through affine transformations (rotation, translation, and scaling) or by using similar images. The augmentation improves prediction accuracy, reduces the prediction variance, and stabilizes the output. The architecture, number of epochs, batch size, and learning rate are optimized under a fixed but uncertain training set by searching for the global maximum of accuracy over multiple realizations. Besides the quality of the training set, the learning rate is the heavy hitter in the optimization process: it controls the diffusion of information through the model, and under imbalanced conditions a fast learning rate causes the model to miss the main features.

A further challenge in fracture recognition on a real outcrop is to optimally pick the parental images used to generate the initial training set. We suggest picking images from multiple sides of the outcrop that show significant variation in the features; this avoids long iterations within the workflow. Overall, we introduce a new approach to address the uncertainties associated with both the training process and the physical problem. The proposed approach is general in concept and can be applied to various deep-learning problems in geoscience.
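
As a concrete illustration of the Monte Carlo Dropout step described above, the sketch below runs repeated stochastic forward passes through a U-Net-style network with dropout kept active and reduces them to a mean prediction and a per-pixel variance map. This is a minimal sketch in PyTorch, not the authors' implementation: the `DropoutBlock` module, the dropout probability, and the number of samples are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DropoutBlock(nn.Module):
    """Conv -> ReLU -> Dropout: a dropout layer after every "weighting" layer."""
    def __init__(self, in_ch, out_ch, p=0.5):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p),
        )

    def forward(self, x):
        return self.block(x)

@torch.no_grad()
def mc_dropout_predict(model, image, n_samples=30):
    """Repeated stochastic forward passes with dropout active; returns the
    mean fracture-probability map and the per-pixel variance map."""
    model.train()  # keeps dropout layers sampling at inference time
    preds = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)
```

Note that `model.train()` also switches any batch-normalization layers into training mode; a more careful implementation would re-enable only the dropout modules.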


2021
Author(s): Oliver Mey, Andre Schneider, Olaf Enge-Rosenblatt, Yesnier Bravo, Pit Stenzel

2021
Author(s): Craig K Jones, Guoqing Wang, Vivek Yedavalli, Haris Sair

This work derives a multinomial probability function and quantitative measures of the data (aleatoric) and epistemic uncertainty as direct outputs of a 3D U-Net segmentation network. A set of T1 brain MRI volumes was downloaded from the Connectome Project and segmented using FMRIB's FAST algorithm to serve as ground truth. A 3D U-Net was trained with sample sizes of 200, 500, and 898 T1 volumes using a loss function defined as the negative log-likelihood derived from the multinomial probability function. From this definition, the epistemic (model) and aleatoric (data) uncertainty equations were derived and used to compute per-voxel uncertainty maps. Both uncertainties decreased as the amount of training data increased. The network trained with 898 volumes produced uncertainty maps that were high primarily in tissue-boundary regions. Averaged over all test data (Connectome and tumor data separately), the epistemic uncertainty showed the expected decreasing trend with increasing training-set size; the aleatoric uncertainty showed a similar but weaker trend, also as expected, since aleatoric uncertainty should depend less on the amount of training data. The derived data and epistemic uncertainty equations from a multinomial probability distribution are applicable to any 2D or 3D neural network.
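
The paper derives its own uncertainty equations; the sketch below shows one common decomposition consistent with a multinomial likelihood (the diagonal of the predictive covariance, as in Kwon et al., 2020), in which the aleatoric term is the mean multinomial variance p(1 − p) and the epistemic term is the variance of the softmax outputs across stochastic forward passes. The PyTorch code and the assumption of dropout-based stochasticity are illustrative, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def multinomial_nll(logits, target):
    """Negative log-likelihood of one-hot multinomial labels, i.e. the
    cross-entropy. logits: (N, C, D, H, W); target: (N, D, H, W) class ids."""
    return F.cross_entropy(logits, target)

@torch.no_grad()
def uncertainty_maps(model, volume, n_samples=20):
    """Per-voxel aleatoric and epistemic uncertainty from T stochastic
    passes (diagonal of the multinomial-variance decomposition):
        aleatoric = E_t[p_t * (1 - p_t)],  epistemic = Var_t[p_t]."""
    model.train()  # assumes dropout or similar stochasticity in the model
    probs = torch.stack([F.softmax(model(volume), dim=1)
                         for _ in range(n_samples)])
    p_bar = probs.mean(dim=0)
    aleatoric = (probs * (1.0 - probs)).mean(dim=0)
    epistemic = ((probs - p_bar) ** 2).mean(dim=0)
    return aleatoric, epistemic
```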


Author(s): Ryan-Rhys Griffiths, Alexander Aldrick, Miguel Garcia-Ortegon, Vidhi Lalchand, Alpha Lee

2021, pp. 53-68
Author(s): Andrew C. A. Elliott

A discussion of randomness: what it is and where it comes from. We mention randomness originating from quantum effects, from chaos, and from noise, and draw a distinction between epistemic and aleatoric uncertainty. Automatic generation of random numbers is needed in many contexts; one example is ERNIE, the device used for Premium Bonds draws. We also discuss the generation and use of pseudo-random numbers.


Author(s): Z. Zhong, M. Mehltretter

Abstract. The ability to identify erroneous depth estimates is of fundamental interest. Information on the aleatoric uncertainty of depth estimates can, for example, be used to support the depth-reconstruction process itself. Consequently, various methods for estimating aleatoric uncertainty in the context of dense stereo matching have been presented in recent years, with deep learning-based approaches being particularly popular. Among these, probabilistic strategies are attracting increasing interest, because the estimated uncertainty can be quantified in pixels or in metric units thanks to the consideration of real error distributions. However, existing probabilistic methods usually assume a unimodal error distribution, neglecting real-world cases that violate this assumption. To overcome this limitation, we propose two novel mixture probability models combining Laplacian and Uniform distributions for the task of aleatoric uncertainty estimation. In this way, we explicitly address commonly challenging regions in dense stereo matching and outlier measurements, respectively. To allow a fair comparison, we adapt a common neural network architecture to investigate the effects of the different uncertainty models. In an extensive evaluation using two datasets and two common dense stereo matching methods, the proposed methods demonstrate state-of-the-art accuracy.
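
A minimal sketch of such a mixture loss is given below: the network is assumed to predict, per pixel, the log of the Laplacian scale and the mixture weight of a Uniform outlier component, and the loss is the negative log-likelihood of the disparity residual under the Laplacian-Uniform mixture. The parameterization (log-scale output, fixed disparity range `d_max`) is an assumption; the paper's exact formulation may differ.

```python
import torch

def laplacian_uniform_nll(residual, log_scale, alpha, d_max=192.0):
    """Negative log-likelihood of disparity residuals under a mixture of a
    Laplacian (inlier) and a Uniform (outlier) distribution.
    residual: predicted minus ground-truth disparity, per pixel
    log_scale: predicted log of the Laplacian scale b (log keeps b > 0)
    alpha: mixture weight of the Uniform component, in (0, 1)
    d_max: half-width of the Uniform support (disparity search range)."""
    b = torch.exp(log_scale)
    laplace = torch.exp(-residual.abs() / b) / (2.0 * b)  # Laplace density
    uniform = 1.0 / (2.0 * d_max)                         # Uniform density
    mixture = (1.0 - alpha) * laplace + alpha * uniform
    return -torch.log(mixture + 1e-12).mean()
```

Pixels that the Laplacian component cannot explain are absorbed by the Uniform component, so the learned scale b remains a meaningful uncertainty measure for inlier regions.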


Water, 2021, Vol. 13 (10), pp. 1389
Author(s): Stanislav Paseka, Daniel Marton

Uncertainty in water management is an extensive and widely discussed topic. It is generally held that uncertainty comprises epistemic and aleatoric components. This work deals with the comprehensive determination of the functional water volumes of a reservoir during extreme hydrological events under aleatoric uncertainty, treated here as input-data uncertainty. The input-data uncertainties were constructed using the Monte Carlo method and applied to the data used in the water-management solution of the reservoir: (i) average monthly water inflows, (ii) hydrographs, (iii) bathygraphic curves, and (iv) water losses through evaporation and dam seepage. To determine the storage volume of the reservoir, a simulation-optimization model was developed that uses the reservoir balance equation to find the optimal storage volume. For the second hydrological extreme, a simulation model of flood-discharge transformation was developed based on the first-order reservoir differential equation. Linking the two models makes it possible to comprehensively determine the functional volumes of the reservoir under input-data uncertainty. The models were applied to a case study of the Vír reservoir in the Czech Republic, which serves both water storage and flood protection. The results were analyzed in detail to verify whether the reservoir is sufficiently resilient to current hydrological extremes and to suggest a redistribution of its functional volumes under conditions of measurement uncertainty.
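
The sketch below illustrates the core of such an approach under stated assumptions: a monthly reservoir balance equation propagated forward in time, wrapped in a Monte Carlo loop that perturbs the inflow series to represent input-data uncertainty. All numbers (inflows, releases, capacity, losses, noise level) are hypothetical placeholders, not the Vír reservoir data used in the study.

```python
import numpy as np

def simulate_storage(inflow, release, losses, capacity, s0):
    """Monthly balance S[t+1] = clip(S[t] + Q[t] - O[t] - L[t], 0, capacity);
    returns the storage trajectory and the monthly water deficits."""
    storage, deficit = [s0], []
    for q, o, l in zip(inflow, release, losses):
        s = storage[-1] + q - o - l
        deficit.append(max(0.0, -s))               # unmet demand this month
        storage.append(min(max(s, 0.0), capacity))
    return np.array(storage), np.array(deficit)

rng = np.random.default_rng(0)
# Hypothetical 10-year monthly series; the study uses measured data.
inflow = np.full(120, 8.0)    # mean monthly inflow (hm^3)
release = np.full(120, 6.5)   # required monthly release (hm^3)
losses = np.full(120, 0.3)    # evaporation + seepage (hm^3)

# Monte Carlo over input-data uncertainty: perturb the inflows and record
# the total deficit of each realization to characterize reliability.
total_deficits = []
for _ in range(1000):
    noisy = inflow * rng.normal(1.0, 0.15, size=inflow.size)
    _, deficit = simulate_storage(noisy, release, losses, capacity=50.0, s0=25.0)
    total_deficits.append(deficit.sum())
print(f"95th-percentile total deficit: {np.quantile(total_deficits, 0.95):.1f} hm^3")
```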


Geophysics, 2021, pp. 1-63
Author(s): Nam Pham, Sergey Fomel

We have adopted a method to understand the uncertainty and interpretability of a Bayesian convolutional neural network for detecting 3D channel geobodies in seismic volumes. We measure heteroscedastic aleatoric uncertainty and epistemic uncertainty: epistemic uncertainty captures uncertainty in the network parameters, whereas heteroscedastic aleatoric uncertainty accounts for noise in the seismic volumes. We train a network modified from the U-Net architecture on 3D synthetic seismic volumes and then apply it to field data. Tests on 3D field data sets from the Browse Basin, offshore Australia, and from Parihaka, New Zealand, show that the uncertainty volumes are related to geologic uncertainty, model mispicks, and input noise. We analyze model interpretability on these data sets by creating saliency volumes with gradient-weighted class activation mapping. We find that the model takes a global-to-local approach to localize channel geobodies, and we identify the importance of different model components in the overall strategy. Using channel-probability, uncertainty, and saliency volumes, interpreters can accurately identify channel geobodies in 3D seismic volumes and also understand the model's predictions.
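
As an illustration of how heteroscedastic aleatoric uncertainty is typically learned in this setting (following Kendall and Gal, 2017, which the abstract's terminology suggests), the sketch below corrupts the network's logits with learned, input-dependent Gaussian noise and averages the likelihood over noise samples. The two-headed output (`logits`, `log_var`) is an assumption about the architecture, not the authors' exact design.

```python
import torch
import torch.nn.functional as F

def heteroscedastic_ce_loss(logits, log_var, target, n_samples=10):
    """Classification loss with learned heteroscedastic aleatoric noise:
    corrupt the logits with input-dependent Gaussian noise, average the
    class probabilities over samples, and take the negative log-likelihood.
    logits, log_var: (N, C, D, H, W) network outputs; target: (N, D, H, W)."""
    sigma = torch.exp(0.5 * log_var)  # noise std predicted per voxel and class
    probs = torch.stack([
        F.softmax(logits + sigma * torch.randn_like(logits), dim=1)
        for _ in range(n_samples)
    ]).mean(dim=0)
    return F.nll_loss(torch.log(probs + 1e-12), target)
```

Minimizing this loss lets the network attenuate the influence of noisy voxels by predicting a larger variance there, which is what makes the resulting aleatoric uncertainty map input-dependent.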

