Probability Fusion Decision Framework of Multiple Deep Neural Networks for Fine-Grained Visual Classification

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 122740-122757 ◽  
Author(s):  
Yang-Yang Zheng ◽  
Jian-Lei Kong ◽  
Xue-Bo Jin ◽  
Xiao-Yi Wang ◽  
Ting-Li Su ◽  
...  

Author(s):  
Xiuwen Yi ◽  
Zhewen Duan ◽  
Ruiyuan Li ◽  
Junbo Zhang ◽  
Tianrui Li ◽  
...  

2020 ◽  
Vol 34 (07) ◽  
pp. 12781-12788 ◽  
Author(s):  
Chuanyi Zhang ◽  
Yazhou Yao ◽  
Huafeng Liu ◽  
Guo-Sen Xie ◽  
Xiangbo Shu ◽  
...  

Labeling objects at the subordinate level typically requires expert knowledge, which is not always available from a random annotator. Accordingly, learning directly from web images for fine-grained visual classification (FGVC) has attracted broad attention. However, label noise in web images is a major obstacle to training robust deep neural networks. In this paper, we propose a novel approach that removes irrelevant samples from real-world web images during training and utilizes only useful images for updating the networks. Our network thereby alleviates the harmful effects of irrelevant noisy web images and achieves better performance. Extensive experiments on three commonly used fine-grained datasets demonstrate that our approach substantially outperforms state-of-the-art webly supervised methods. The data and source code of this work have been made anonymously available at: https://github.com/z337-408/WSNFGVC.
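The abstract does not detail the sample-selection rule, but a common approach to filtering noisy web images during training is the small-loss criterion: keep only the lowest-loss samples in each batch for the gradient update, on the assumption that mislabeled or irrelevant images tend to incur high loss. The sketch below is an illustration of that general idea, not the paper's actual method; the function name and `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def select_clean_samples(losses, keep_ratio=0.7):
    """Return indices of the lowest-loss samples in a batch.

    Low-loss samples are assumed more likely to be correctly labeled;
    the remaining high-loss (presumably noisy) samples are simply
    excluded from this batch's parameter update.
    """
    losses = np.asarray(losses)
    n_keep = max(1, int(len(losses) * keep_ratio))
    # Indices of the n_keep smallest loss values
    keep_idx = np.argsort(losses)[:n_keep]
    return np.sort(keep_idx)

# Example: 10 samples, two of which (indices 2 and 5) have
# conspicuously large losses and are treated as noisy.
batch_losses = [0.2, 0.5, 3.1, 0.4, 0.3, 2.8, 0.6, 0.1, 0.7, 0.35]
clean = select_clean_samples(batch_losses, keep_ratio=0.8)
```

In a training loop, the network would then be updated only on `batch[clean]`, so the noisy samples never contribute gradients.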


2020 ◽  
Vol 102 ◽  
pp. 210-221 ◽  
Author(s):  
Wenbin Jiang ◽  
Yangsong Zhang ◽  
Pai Liu ◽  
Jing Peng ◽  
Laurence T. Yang ◽  
...  

2021 ◽  
Vol 12 (1) ◽  
pp. 268 ◽  
Author(s):  
Jiali Deng ◽  
Haigang Gong ◽  
Minghui Liu ◽  
Tianshu Xie ◽  
Xuan Cheng ◽  
...  

It has been shown that the learning rate is one of the most critical hyper-parameters for the overall performance of deep neural networks. In this paper, we propose a new method for setting the global learning rate, named random amplify learning rates (RALR), to improve the performance of any optimizer in training deep neural networks. Instead of monotonically decreasing the learning rate, we aim to escape saddle points or local minima by amplifying the learning rate between reasonable boundary values according to a given probability. Training with RALR, rather than conventionally decreasing the learning rate, further improves network performance without extra computational cost. Remarkably, RALR is complementary to state-of-the-art data augmentation and regularization methods. In addition, we empirically study its performance on image classification, fine-grained classification, object detection, and machine translation tasks. Experiments demonstrate that RALR brings a notable improvement while preventing overfitting when training deep neural networks. For example, the classification accuracy of ResNet-110 trained on the CIFAR-100 dataset with RALR achieves a 1.34% gain over ResNet-110 trained conventionally.
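The core rule described in the abstract — with some probability, amplify the current learning rate by a factor drawn between boundary values, otherwise leave it unchanged — can be sketched as below. The function name and the specific probability and boundary values are assumptions for illustration; the paper's actual schedule and boundaries are not given in the abstract.

```python
import random

def ralr_step(base_lr, amplify_prob=0.1, min_factor=2.0, max_factor=5.0, rng=None):
    """One RALR adjustment of the global learning rate (sketch).

    With probability `amplify_prob`, multiply the learning rate by a
    random factor in [min_factor, max_factor] to help the optimizer
    escape saddle points or poor local minima; otherwise return the
    learning rate from the underlying schedule unchanged.
    """
    rng = rng or random
    if rng.random() < amplify_prob:
        factor = rng.uniform(min_factor, max_factor)
        return base_lr * factor
    return base_lr

# Example: wrap a conventional exponentially decaying schedule,
# occasionally amplifying the rate instead of only decaying it.
random.seed(0)
schedule = [0.1 * (0.95 ** epoch) for epoch in range(5)]
lrs = [ralr_step(lr, amplify_prob=0.3) for lr in schedule]
```

Because the amplification is applied on top of any base schedule, this composes with an arbitrary optimizer: the optimizer's learning rate is simply overwritten with `ralr_step(...)` at each step or epoch.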


Author(s):  
Erzhuo Shao ◽  
Huandong Wang ◽  
Jie Feng ◽  
Tong Xia ◽  
Hedong Yang ◽  
...  

Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann
