Reliable Prediction Errors for Deep Neural Networks Using Test-Time Dropout

2019 ◽  
Vol 59 (7) ◽  
pp. 3330-3339 ◽  
Author(s):  
Isidro Cortés-Ciriano ◽  
Andreas Bender


2020 ◽  
Vol 34 (04) ◽  
pp. 5462-5469
Author(s):  
Goutham Ramakrishnan ◽  
Yun Chan Lee ◽  
Aws Albarghouthi

When a model makes a consequential decision, e.g., denying someone a loan, it should also generate actionable, realistic feedback on what the person can do to favorably change the decision. We cast this problem through the lens of program synthesis, in which our goal is to synthesize an optimal (realistically cheapest or simplest) sequence of actions that, if executed successfully, will change the person's classification. We present a novel and general approach that combines search-based program synthesis and test-time adversarial attacks to construct action sequences over a domain-specific set of actions. We demonstrate the effectiveness of our approach on a number of deep neural networks.
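The core idea of synthesizing a cheapest action sequence can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the linear "loan" classifier, the two actions and their costs, and the exhaustive search are stand-ins, not the authors' synthesis-plus-adversarial-attack system.

```python
import itertools
import numpy as np

# Hypothetical linear "loan" classifier: approve when the score is positive.
weights = np.array([0.6, 0.4])          # features: income, credit_history
bias = -5.0

def classify(x):
    return float(x @ weights + bias) > 0

# Hypothetical domain-specific actions: (name, cost, additive effect on features).
actions = [
    ("increase_income", 2.0, np.array([2.0, 0.0])),
    ("repay_debt",      1.0, np.array([0.0, 3.0])),
]

def cheapest_plan(x0, max_len=3):
    """Exhaustively search action sequences up to max_len and return the
    cheapest one that flips the classifier's decision to 'approve'."""
    best = None
    for k in range(1, max_len + 1):
        for seq in itertools.product(actions, repeat=k):
            x = x0 + sum(a[2] for a in seq)
            cost = sum(a[1] for a in seq)
            if classify(x) and (best is None or cost < best[0]):
                best = (cost, [a[0] for a in seq])
    return best

x0 = np.array([4.0, 1.0])   # currently denied: 0.6*4 + 0.4*1 - 5 = -2.2
print(cheapest_plan(x0))    # cheapest sequence of actions that gets approval
```

A real system would replace the exhaustive enumeration with guided search over a richer action language, but the objective is the same: minimize cost subject to the decision flipping.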


2019 ◽  
Vol 10 (36) ◽  
pp. 8438-8446 ◽  
Author(s):  
Seongok Ryu ◽  
Yongchan Kwon ◽  
Woo Youn Kim

Deep neural networks have been increasingly used in various chemical fields. Here, we show that Bayesian inference enables more reliable prediction with quantitative uncertainty analysis.
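The flavor of test-time (Monte Carlo) dropout as approximate Bayesian inference can be sketched in a few lines. The one-layer model, weights, and dropout rate below are all illustrative assumptions; the point is that keeping dropout active at prediction time yields a distribution of outputs whose spread serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer regressor; W is a stand-in for trained weights.
W = rng.normal(size=(16, 1))

def forward(x, p_drop=0.5):
    # Keeping the Bernoulli dropout mask active at *test* time makes each
    # forward pass a sample from an approximate posterior predictive.
    mask = rng.random(x.shape) > p_drop
    h = (x * mask) / (1 - p_drop)      # inverted-dropout scaling
    return float(h @ W)

def mc_predict(x, n_samples=200):
    """Average many stochastic passes; the standard deviation across
    samples is the quantitative uncertainty attached to the prediction."""
    samples = np.array([forward(x) for _ in range(n_samples)])
    return samples.mean(), samples.std()

x = rng.normal(size=16)
mean, std = mc_predict(x)
print(f"prediction {mean:.3f} ± {std:.3f}")
```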


Author(s):  
Jiaqi Guan ◽  
Yang Liu ◽  
Qiang Liu ◽  
Jian Peng

Deep neural networks have been remarkably successful in various AI tasks but often incur high computation and energy costs, which is prohibitive for energy-constrained applications such as mobile sensing. We address this problem by proposing a novel framework that optimizes the prediction accuracy and energy cost simultaneously, thus enabling effective cost-accuracy trade-offs at test time. In our framework, each data instance is pushed into a cascade of deep neural networks with increasing sizes, and a selection module is used to sequentially determine when a sufficiently accurate classifier can be used for this data instance. The cascade of neural networks and the selection module are jointly trained in an end-to-end fashion by the REINFORCE algorithm to optimize a trade-off between the computational cost and the predictive accuracy. Our method is able to simultaneously improve accuracy and efficiency by learning to assign easy instances to fast yet sufficiently accurate classifiers to save computation and energy, while assigning harder instances to deeper and more powerful classifiers to ensure satisfactory accuracy. Moreover, we demonstrate our method's effectiveness with extensive experiments on CIFAR-10/100, ImageNet32x32 and the original ImageNet dataset.
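The cascade-with-early-exit idea can be sketched as follows. This is a simplification under stated assumptions: the three "classifiers" are random linear stand-ins with made-up costs, and the learned REINFORCE selection module is replaced by a fixed confidence threshold, which is enough to show the control flow of stopping early on easy instances.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-ins for classifiers of increasing size: each returns logits for 3 classes,
# paired with a hypothetical inference cost.
def make_stage(scale, cost):
    W = rng.normal(scale=scale, size=(8, 3))
    return (lambda x, W=W: x @ W), cost

cascade = [make_stage(0.3, 1.0), make_stage(1.0, 5.0), make_stage(3.0, 25.0)]

def cascaded_predict(x, threshold=0.9):
    """Run stages in order; stop as soon as the max class probability
    exceeds the threshold, so easy inputs never pay for the big model."""
    spent = 0.0
    for stage, cost in cascade:
        spent += cost
        probs = softmax(stage(x))
        if probs.max() >= threshold:
            break
    return int(probs.argmax()), spent

x = rng.normal(size=8)
label, cost = cascaded_predict(x)
```

In the paper the stopping rule is itself trained jointly with the cascade; the threshold here just stands in for that learned selection module.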


2019 ◽  
Author(s):  
Murat Seçkin Ayhan ◽  
Laura Kühlewein ◽  
Gulnar Aliyeva ◽  
Werner Inhoffen ◽  
Focke Ziemssen ◽  
...  

ABSTRACT
Deep learning-based systems can achieve a diagnostic performance comparable to physicians in a variety of medical use cases including the diagnosis of diabetic retinopathy. To be useful in clinical practice, it is necessary to have well-calibrated measures of the uncertainty with which these systems report their decisions. However, deep neural networks (DNNs) are often overconfident in their predictions, and are not amenable to a straightforward probabilistic treatment. Here, we describe an intuitive framework based on test-time data augmentation for quantifying the diagnostic uncertainty of a state-of-the-art DNN for diagnosing diabetic retinopathy. We show that the derived measure of uncertainty is well-calibrated and that experienced physicians likewise find cases with uncertain diagnosis difficult to evaluate. This paves the way for an integrated treatment of uncertainty in DNN-based diagnostic systems.
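Test-time augmentation as an uncertainty measure can be sketched generically. The linear classifier and the noise-based "augmentation" below are illustrative assumptions (real retinal-image pipelines would use crops, flips, and brightness changes); what carries over is averaging predictions over augmented copies and reading uncertainty off the spread of the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in trained classifier over 8 features, 2 classes.
W = rng.normal(size=(8, 2))

def predict_probs(x):
    return softmax(x @ W)

def augment(x):
    # Placeholder for realistic image augmentations: here, small
    # random perturbations of the input.
    return x + rng.normal(scale=0.1, size=x.shape)

def tta_uncertainty(x, n_aug=100):
    """Average predictions over augmented copies; the entropy of the
    mean predictive distribution is the diagnostic uncertainty."""
    probs = np.array([predict_probs(augment(x)) for _ in range(n_aug)])
    mean = probs.mean(axis=0)
    entropy = -(mean * np.log(mean + 1e-12)).sum()
    return mean, entropy

x = rng.normal(size=8)
mean, entropy = tta_uncertainty(x)
```

High-entropy cases are exactly the ones the paper reports physicians also find hard, which is what makes the measure clinically interpretable.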


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann

2018 ◽  
Author(s):  
Chi Zhang ◽  
Xiaohan Duan ◽  
Ruyuan Zhang ◽  
Li Tong
