Committee neural network potentials control generalization errors and enable active learning

2020 ◽  
Vol 153 (10) ◽  
pp. 104105 ◽  
Author(s):  
Christoph Schran ◽  
Krystof Brezina ◽  
Ondrej Marsalek


2021 ◽  
Vol 7 (2) ◽  
pp. 37
Author(s):  
Isah Charles Saidu ◽  
Lehel Csató

We present a sample-efficient image segmentation method using active learning, which we call Active Bayesian UNet, or AB-UNet. It is a convolutional neural network that uses batch normalization and max-pool dropout. The Bayesian setup is achieved by exploiting the probabilistic extension of the dropout mechanism, which makes it possible to use the uncertainty inherently present in the system. We run experiments on various medical image datasets and show that, with a smaller annotation effort, AB-UNet leads to stable training and better generalization. In addition, it can efficiently choose informative samples from an unlabelled dataset.
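As an illustrative sketch (not the authors' implementation), the dropout-based uncertainty described above can be turned into an acquisition score: run several stochastic forward passes with dropout enabled, average the softmax outputs, and rank unlabelled images by predictive entropy. All function names here are hypothetical.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Pixel-wise predictive entropy from T stochastic forward passes.

    mc_probs: array of shape (T, H, W, C) holding softmax probabilities
    from T dropout-enabled passes over the same image.
    """
    mean_p = mc_probs.mean(axis=0)                        # (H, W, C)
    eps = 1e-12                                           # avoid log(0)
    return -(mean_p * np.log(mean_p + eps)).sum(axis=-1)  # (H, W)

def select_most_uncertain(images_mc_probs, k):
    """Rank unlabelled images by mean pixel entropy; return top-k indices."""
    scores = [predictive_entropy(p).mean() for p in images_mc_probs]
    return np.argsort(scores)[::-1][:k].tolist()
```

In an actual AB-UNet pipeline the `mc_probs` tensors would come from the network with dropout layers kept active at inference time; here they are plain arrays so the scoring logic stands alone.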


2022 ◽  
Vol 14 (2) ◽  
pp. 861
Author(s):  
Han-Cheng Dan ◽  
Hao-Fan Zeng ◽  
Zhi-Heng Zhu ◽  
Ge-Wen Bai ◽  
Wei Cao

Image recognition based on deep learning generally demands a huge sample size for training, for which the image labeling becomes inevitably laborious and time-consuming. In the case of evaluating pavement quality condition, many pavement distress patching images need manual screening and labeling, and the subjectivity of the labeling personnel greatly affects the accuracy of image labeling. In this study, to achieve accurate and efficient recognition of pavement patching images, an interactive labeling method is proposed based on the U-Net convolutional neural network, using active learning combined with reverse and correction labeling. According to the calculation results in this paper, the sample size required by the interactive labeling is about half that of the traditional labeling method for the same recognition precision. Meanwhile, the accuracy of the interactive labeling method based on the mean intersection over union (mean_IOU) index is 6% higher than that of the traditional method using the same sample size and training epochs. In addition, the accuracy analysis of the noise and boundary of the prediction results shows that this method eliminates 92% of the noise in the predictions (the proportion of noise is reduced from 13.85% to 1.06%), and the image definition is improved by 14.1% in terms of the boundary gray area ratio. The interactive labeling is considered a significantly valuable approach, as it reduces the sample size in each epoch of active learning, greatly alleviates the demand for manpower, and improves learning efficiency and accuracy.
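The mean_IOU index used above is the standard mean intersection-over-union between a predicted label map and the ground truth, averaged over classes. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes.

    pred, target: integer label maps of the same shape.
    Classes absent from both prediction and ground truth are skipped
    so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class not present anywhere
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```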


Author(s):  
Shaolei Wang ◽  
Zhongyuan Wang ◽  
Wanxiang Che ◽  
Sendong Zhao ◽  
Ting Liu

Spoken language is fundamentally different from written language in that it contains frequent disfluencies, i.e., parts of an utterance that are corrected by the speaker. Disfluency detection (removing these disfluencies) is desirable to clean the input for use in downstream NLP tasks. Most existing approaches to disfluency detection rely heavily on human-annotated data, which is scarce and expensive to obtain in practice. To tackle the training data bottleneck, in this work, we investigate methods for combining self-supervised learning and active learning for disfluency detection. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled data and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words and (ii) sentence classification to distinguish original sentences from grammatically incorrect sentences. We then combine these two tasks to jointly pre-train a neural network. The pre-trained neural network is then fine-tuned using human-annotated disfluency detection training data. The self-supervised learning method can capture task-specific knowledge for disfluency detection and achieve better performance when fine-tuning on a small annotated dataset compared to other supervised methods. However, because the pseudo training data are generated with simple heuristics and cannot fully cover all disfluency patterns, there is still a performance gap compared to the supervised models trained on the full training dataset. We further explore how to bridge the performance gap by integrating active learning during the fine-tuning process. Active learning strives to reduce annotation costs by choosing the most critical examples to label and can address the weakness of self-supervised learning with a small annotated dataset.
We show that by combining self-supervised learning with active learning, our model is able to match state-of-the-art performance with just about 10% of the original training data on both the commonly used English Switchboard test set and a set of in-house annotated Chinese data.
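The pseudo-data construction described above can be sketched as a simple corruption routine: random insertions produce "noisy" words for the tagging task (tag 1), while random deletions make the sentence ungrammatical for the sentence-classification task. This is a hedged illustration under our own heuristics, not the authors' exact generation code; all names are hypothetical.

```python
import random

def make_pseudo_example(words, rng, p_add=0.15, p_del=0.1):
    """Corrupt a fluent sentence into a pseudo disfluent one.

    Returns (noisy_words, tags): tag 1 marks a randomly inserted word
    (to be detected by the tagging task), tag 0 marks an original word.
    Deleted words simply vanish, yielding an ungrammatical sentence for
    the sentence-classification pre-training task.
    """
    noisy, tags = [], []
    for w in words:
        if rng.random() < p_del:
            continue                    # deletion: drop this word
        if rng.random() < p_add:
            filler = rng.choice(words)  # insertion: repeat a random word
            noisy.append(filler)
            tags.append(1)
        noisy.append(w)
        tags.append(0)
    return noisy, tags
```

Running this over a large unlabeled corpus yields (sentence, tag-sequence) pairs for pre-training without any human annotation.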


Author(s):  
Sandrine I. Herriot ◽  
Brenda Ng ◽  
Wade H. Williams ◽  
Sachin S. Talathi ◽  
Thomas Spinka ◽  
...  

2020 ◽  
Vol 23 (13) ◽  
pp. 2952-2964
Author(s):  
Zhen Wang ◽  
Guoshan Xu ◽  
Yong Ding ◽  
Bin Wu ◽  
Guoyu Lu

Concrete surface crack detection based on computer vision, specifically via a convolutional neural network, has drawn increasing attention for replacing manual visual inspection of bridges and buildings. This article proposes a new framework for this task, together with a sampling and training method based on active learning to address class imbalance. In particular, the new framework includes a clear definition of two categories of samples, a relevant sliding window technique, and data augmentation and annotation methods. The advantages of this framework are that data integrity can be ensured and a very large amount of annotation work can be saved. Training datasets generated with the proposed sampling and training method not only are representative of the original dataset but also highlight samples that are highly complex, yet informative. Based on the proposed framework and sampling and training strategy, AlexNet is re-tuned, validated, tested and compared with an existing network. The investigation revealed outstanding performance of the proposed framework in terms of detection accuracy, precision and F1 measure due to its nonlinear learning ability, training dataset integrity and active learning strategy.
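The sliding window technique mentioned above can be sketched as follows: enumerate fixed-size windows over the image, shifting the last row and column inward so that every patch lies fully inside the image and no partial patches are produced. This is our own minimal illustration, not the paper's implementation.

```python
def sliding_windows(height, width, win, stride):
    """Top-left corners of square sliding windows covering an image.

    The final row/column of windows is shifted inward so that every
    window of size `win` lies fully inside the (height, width) image.
    """
    ys = list(range(0, max(height - win, 0) + 1, stride))
    xs = list(range(0, max(width - win, 0) + 1, stride))
    if ys[-1] != height - win:
        ys.append(height - win)  # extra row flush with the bottom edge
    if xs[-1] != width - win:
        xs.append(width - win)   # extra column flush with the right edge
    return [(y, x) for y in ys for x in xs]
```

Patches cropped at these corners can then be labeled as crack or background, the two sample categories the framework defines, before active-learning selection.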

