interval neural network
Recently Published Documents

TOTAL DOCUMENTS: 6 (FIVE YEARS: 2)
H-INDEX: 1 (FIVE YEARS: 0)

Author(s): Luis Oala, Cosmas Heiß, Jan Macdonald, Maximilian März, Gitta Kutyniok, et al.

Abstract
Purpose: The quantitative detection of failure modes is important for making deep neural networks reliable and usable at scale. We consider three examples of common failure modes in image reconstruction and demonstrate the potential of uncertainty quantification as a fine-grained alarm system.
Methods: We propose a deterministic, modular, and lightweight approach called the Interval Neural Network (INN), which produces fast and easy-to-interpret uncertainty scores for deep neural networks. Importantly, INNs can be constructed post hoc for already-trained prediction networks. We compare the approach against state-of-the-art baseline methods (MCDrop, ProbOut).
Results: On controlled, synthetic inverse problems we demonstrate the capacity of INNs to capture uncertainty due to noise as well as directional error information. On a real-world inverse problem with human CT scans, we show that INNs produce uncertainty scores that improve the detection of all considered failure modes compared to the baseline methods.
Conclusion: Interval Neural Networks offer a promising tool for exposing weaknesses of deep image reconstruction models and ultimately making them more reliable. The fact that they can be applied post hoc to equip already-trained deep neural network models with uncertainty scores makes them particularly interesting for deployment.
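The abstract does not specify the INN construction in detail, but the core idea of propagating intervals through a network can be sketched with plain interval arithmetic. The following is a minimal illustration, not the authors' exact method: it pushes an input box [x_lo, x_hi] through one affine layer plus ReLU, and all variable names and the choice of input (rather than weight) intervals are assumptions made here for illustration.

```python
import numpy as np

def interval_forward(W, b, x_lo, x_hi):
    """Propagate an interval [x_lo, x_hi] through y = relu(W @ x + b).

    Splitting W into its positive and negative parts makes each bound
    attainable at a corner of the input box, so the output interval is
    sound (it contains every reachable activation).
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    # ReLU is monotone, so applying it to the bounds preserves soundness.
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

# Toy example: random layer, input uncertainty of +/- 0.1 per coordinate.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))
b = np.zeros(3)
x = np.array([1.0, -0.5])
lo, hi = interval_forward(W, b, x - 0.1, x + 0.1)
width = hi - lo  # per-neuron interval width, usable as an uncertainty score
```

Stacking such layers yields an output interval per pixel; its width can serve as the kind of fine-grained uncertainty score the abstract describes.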


2020, Vol 4 (OOPSLA), pp. 1-27
Author(s): Yu Wang, Ke Wang, Fengjuan Gao, Linzhang Wang

2016, Vol 29 (8), pp. 311-318
Author(s): Li-fen Yang, Chong Liu, Hao Long, Rana Aamir Raza Ashfaq, Yu-lin He

2012, Vol 2012, pp. 1-25
Author(s): Dakun Yang, Wei Wu

In many applications it is natural to use interval data to describe various kinds of uncertainty. This paper is concerned with an interval neural network with one hidden layer. For the original interval neural network, the learning procedure can cause the weights to oscillate, as our numerical experiments indicate. In this paper, a smoothing interval neural network is proposed to prevent weight oscillation during learning. Here, by smoothing we mean that, in a neighborhood of the origin, the absolute values of the weights in the hidden and output layers are replaced by a smooth function of the weights. The convergence of a gradient algorithm for training the smoothing interval neural network is proved, and supporting numerical experiments are provided.

