Weakly supervised training for parsing Mandarin broadcast transcripts

Author(s):  
Wen Wang
2019 ◽


Author(s):  
Robin Liu ◽  
Lu Wang ◽  
Jim He ◽  
Wenfang Chen

This paper introduces a detection-based framework to segment glomeruli from digital scans of light microscopic slides of renal biopsy specimens. The proposed method aims to combine the precise localization ability of Faster R-CNN with the powerful segmentation ability of U-Net. We use a detector to localize glomeruli in the whole-slide image so that segmentation focuses only on the most relevant areas. We explore the effect of network depth on localization and segmentation ability in glomerular classification, and then propose using a classification network with enhanced localization and segmentation ability to construct and initialize a segmentation network. We also propose a weakly supervised training strategy that trains the segmentation network by taking advantage of the unique morphology of the glomerulus. Both the strong initialization and the weakly supervised training are used to mitigate the problem of insufficient and inaccurate data annotations and to enhance the adaptability of the segmentation network. Experimental results demonstrate that the proposed framework is effective and robust.
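The detect-then-segment pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `detect_rois` and `segment_crop` are hypothetical stand-ins for the Faster R-CNN detector and the U-Net segmenter, and the whole-slide image is a toy array.

```python
import numpy as np

def detect_rois(image):
    # Placeholder detector returning bounding boxes (y0, x0, y1, x1).
    # A real system would run Faster R-CNN on the whole-slide image.
    return [(2, 2, 6, 6)]

def segment_crop(crop):
    # Placeholder segmenter marking pixels above the crop mean.
    # A real system would run a U-Net initialized from the classifier.
    return (crop > crop.mean()).astype(np.uint8)

def segment_slide(image):
    """Run segmentation only inside the detected regions of interest."""
    mask = np.zeros_like(image, dtype=np.uint8)
    for (y0, x0, y1, x1) in detect_rois(image):
        crop = image[y0:y1, x0:x1]
        mask[y0:y1, x0:x1] = segment_crop(crop)
    return mask

# Toy 8x8 "slide" with one bright glomerulus-like blob.
image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0
mask = segment_slide(image)
```

The point of the structure is that the segmenter never sees pixels outside a detected box, which is what lets the segmentation network focus on the most relevant area of the image.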


Author(s):  
Jayant Krishnamurthy ◽  
Thomas Kollar

This paper introduces Logical Semantics with Perception (LSP), a model for grounded language acquisition that learns to map natural language statements to their referents in a physical environment. For example, given an image, LSP can map the statement “blue mug on the table” to the set of image segments showing blue mugs on tables. LSP learns physical representations for both categorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of entire statements. We further introduce a weakly supervised training procedure that estimates LSP’s parameters using annotated referents for entire statements, without annotated referents for individual words or the parse structure of the statement. We perform experiments on two applications: scene understanding and geographical question answering. We find that LSP outperforms existing, less expressive models that cannot represent relational language. We further find that weakly supervised training is competitive with fully supervised training while requiring significantly less annotation effort.
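The compositional grounding idea can be illustrated with a toy denotation computation. This is not LSP itself: the hand-written feature dictionary and the `on` relation are hypothetical stand-ins for the learned categorical and relational classifiers, but the set composition mirrors how per-word denotations combine into the referents of a whole statement like "blue mug on the table".

```python
# Toy scene: image segments with categorical features, plus one relation.
segments = {
    1: {"color": "blue", "type": "mug"},
    2: {"color": "red", "type": "mug"},
    3: {"color": "blue", "type": "table"},
}
on = {(1, 3)}  # segment 1 rests on segment 3

def denote_category(feature, value):
    """Denotation of a categorical word: the set of matching segments."""
    return {s for s, feats in segments.items() if feats[feature] == value}

def denote_relation(rel, right_set):
    """Denotation of a relational word applied to a right-argument set."""
    return {a for (a, b) in rel if b in right_set}

# Compose: "blue mug on the table"
blue_mugs = denote_category("color", "blue") & denote_category("type", "mug")
tables = denote_category("type", "table")
referents = blue_mugs & denote_relation(on, tables)
```

Weak supervision in this setting means the learner only ever observes the final `referents` set for whole statements, never the intermediate word-level denotations.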


2020 ◽  
Vol 34 (05) ◽  
pp. 8536-8543
Author(s):  
Ansong Ni ◽  
Pengcheng Yin ◽  
Graham Neubig

A semantic parser maps natural language commands (NLs) from users to executable meaning representations (MRs), which are then executed in a given environment to obtain the user-desired results. Fully supervised training of such parsers requires NL/MR pairs annotated by domain experts, which makes them expensive to collect. Weakly supervised semantic parsers, by contrast, are learnt only from pairs of NL and expected execution results, leaving the MRs latent. While weak supervision is cheaper to acquire, learning from it poses difficulties: parsers must search a large space with a very weak learning signal, and it is hard to avoid spurious MRs that achieve the correct answer in the wrong way. These factors lead to a performance gap between parsers trained in the weakly and fully supervised settings. To bridge this gap, we examine the intersection of weak supervision and active learning, which allows the learner to actively select examples and query for manual annotations as extra supervision to improve a model trained under weak supervision. We study different active learning heuristics for selecting examples to query, and various forms of extra supervision for such queries. We evaluate the effectiveness of our method on two datasets. Experiments on WikiSQL show that by annotating only 1.8% of examples, we improve over a state-of-the-art weakly supervised baseline by 6.4%, achieving an accuracy of 79.0%, only 1.3% below the model trained with full supervision. Experiments on WikiTableQuestions with human annotators show that our method can improve performance with only 100 active queries, especially for weakly supervised parsers learnt from a cold start.
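One common active-learning heuristic of the kind surveyed above can be sketched as follows. This is a generic margin-based uncertainty sampler, not necessarily the heuristic the paper uses: examples where the weakly supervised parser's top two candidate MRs score closest together are the ones sent to annotators. The example IDs and scores are made up for illustration.

```python
def margin(scores):
    """Gap between the top two candidate-MR scores; small gap = uncertain."""
    top2 = sorted(scores, reverse=True)[:2]
    return top2[0] - top2[1] if len(top2) > 1 else top2[0]

def select_queries(candidate_scores, budget):
    """Pick the `budget` examples with the smallest score margin.

    candidate_scores: {example_id: [score for each candidate MR]}
    """
    ranked = sorted(candidate_scores, key=lambda ex: margin(candidate_scores[ex]))
    return ranked[:budget]

# Toy pool: q2's parser scores are nearly tied, so it is queried first.
scores = {
    "q1": [0.90, 0.05],
    "q2": [0.50, 0.45],
    "q3": [0.70, 0.20],
}
queries = select_queries(scores, budget=1)
```

The annotations obtained for the selected examples then serve as the extra full supervision that closes part of the gap to the fully supervised parser.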
