label semantics
Recently Published Documents

TOTAL DOCUMENTS: 34 (five years: 6)
H-INDEX: 5 (five years: 0)

Author(s):  
Qian-Wen Zhang ◽  
Ximing Zhang ◽  
Zhao Yan ◽  
Ruifang Liu ◽  
Yunbo Cao ◽  
...  

Multi-label text classification is an essential task in natural language processing. Existing multi-label classification models generally treat labels as categorical variables and ignore the exploitation of label semantics. In this paper, we view the task as a correlation-guided text representation problem: an attention-based two-step framework is proposed to integrate text information and label semantics by jointly learning words and labels in the same space. In this way, we aim to capture high-order label-label correlations as well as context-label correlations. Specifically, the proposed approach works by learning token-level representations of words and labels globally through a multi-layer Transformer and constructing an attention vector through a word-label correlation matrix to generate the text representation. This ensures that relevant words receive higher weights than irrelevant words and thus directly optimizes classification performance. Extensive experiments over benchmark multi-label datasets clearly validate the effectiveness of the proposed approach, and further analysis demonstrates that it is competitive both in predicting low-frequency labels and in convergence speed.
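The correlation-guided attention step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the use of a max-over-labels relevance score are assumptions; the paper's word and label representations come from a jointly trained multi-layer Transformer, here replaced by random vectors for demonstration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def label_aware_text_representation(word_embs, label_embs):
    """Hypothetical sketch of correlation-guided attention.

    word_embs:  (T, d) token-level word representations
    label_embs: (L, d) label representations in the same space
    """
    # Word-label correlation matrix, shape (T, L)
    corr = word_embs @ label_embs.T
    # A word's relevance = its strongest correlation with any label
    scores = corr.max(axis=1)
    # Attention vector over words: relevant words get higher weight
    attn = softmax(scores)
    # Text representation: attention-weighted sum of word vectors
    return attn @ word_embs

rng = np.random.default_rng(0)
words = rng.normal(size=(6, 8))    # 6 tokens, dimension 8
labels = rng.normal(size=(4, 8))   # 4 labels in the same space
rep = label_aware_text_representation(words, labels)
print(rep.shape)  # (8,)
```

The resulting vector can then be fed to a standard multi-label classification head; the key property is that words strongly correlated with some label dominate the representation.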


2021 ◽  
Vol 2 (3) ◽  
pp. 1-21
Author(s):  
Deke Guo ◽  
Xiaoqiang Teng ◽  
Yulan Guo ◽  
Xiaolei Zhou ◽  
Zhong Liu

Due to the rapid development of indoor location-based services, automatically deriving an indoor semantic floorplan has become a highly promising technique for ubiquitous applications. To make an indoor semantic floorplan fully practical, it is essential to handle the dynamics of semantic information. Despite several methods proposed for automatic construction and semantic labeling of indoor floorplans, this problem has not been well studied and remains open. In this article, we present a system called SiFi that provides an accurate and automatic self-updating service. It updates semantics with instant videos acquired by mobile devices in indoor scenes. First, a crowdsourcing-based task model is designed to attract users to contribute semantic-rich videos. Second, we use maximum likelihood estimation to solve the text inference problem, as the sequential relationship of texts provides additional geometrical constraints. Finally, we formulate the semantic update as an inference problem to accurately label semantics at the correct locations on indoor floorplans. Extensive experiments were conducted over 9 weeks in a shopping mall with more than 250 stores. Experimental results show that SiFi achieves 84.5% accuracy in semantic updates.
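The maximum-likelihood text inference step can be illustrated with a toy fusion rule. This is a simplified sketch under stated assumptions, not SiFi's actual method: it assumes each sign yields several independent noisy text readings with confidence scores, and picks the reading that maximizes the summed log-odds; the real system additionally exploits the sequential geometric constraints mentioned in the abstract.

```python
import math

def fuse_readings(readings):
    """Hypothetical MLE-style fusion of noisy text recognitions.

    readings: list of (text, confidence) pairs for one sign,
              confidence in (0, 1), readings assumed independent.
    Returns the text with the highest total log-odds evidence.
    """
    scores = {}
    for text, conf in readings:
        # Clamp to avoid log(0) on degenerate confidences
        conf = min(max(conf, 1e-6), 1 - 1e-6)
        # Accumulate log-odds evidence for each candidate text
        scores[text] = scores.get(text, 0.0) + math.log(conf / (1 - conf))
    return max(scores, key=scores.get)

readings = [("Starbucks", 0.9), ("Starbuoks", 0.6), ("Starbucks", 0.8)]
print(fuse_readings(readings))  # Starbucks
```

Repeated consistent readings accumulate evidence, so the correct text wins even when individual OCR passes are unreliable.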


Author(s):  
Ying Gao ◽  
Xiaohan Feng ◽  
Tiange Zhang ◽  
Eric Rigall ◽  
Huiyu Zhou ◽  
...  

Author(s):  
Xitao Zou ◽  
Xinzhi Wang ◽  
Erwin M. Bakker ◽  
Song Wu
IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 152183-152192
Author(s):  
Linkun Cai ◽  
Yu Song ◽  
Tao Liu ◽  
Kunli Zhang

2020 ◽  
Author(s):  
Radhika Gaonkar ◽  
Heeyoung Kwon ◽  
Mohaddeseh Bastan ◽  
Niranjan Balasubramanian ◽  
Nathanael Chambers

2019 ◽  
Vol 7 ◽  
pp. 139-155 ◽  
Author(s):  
Nikolaos Pappas ◽  
James Henderson

Neural text classification models typically treat output labels as categorical variables that lack description and semantics. This forces their parametrization to depend on the label set size, and, hence, they are unable to scale to large label sets or generalize to unseen ones. Existing joint input-label text models overcome these issues by exploiting label descriptions, but they are unable to capture complex label relationships, have rigid parametrization, and their gains on unseen labels often come at the expense of weak performance on the labels seen during training. In this paper, we propose a new input-label model that generalizes over previous such models, addresses their limitations, and does not compromise performance on seen labels. The model consists of a joint nonlinear input-label embedding with controllable capacity and a joint-space-dependent classification unit that is trained with cross-entropy loss to optimize classification performance. We evaluate models on full-resource and low- or zero-resource text classification of multilingual news and biomedical text with a large label set. Our model outperforms monolingual and multilingual models that do not leverage label semantics, as well as previous joint input-label space models, in both scenarios.
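The scoring mechanism of a joint input-label embedding can be sketched as below. This is a minimal illustration of the general idea, not the paper's architecture: the projection names U and V, the tanh nonlinearity, and the random encodings are all assumptions. The key property shown is that the trainable parameters depend only on the embedding dimensions, not on the label set size, so unseen labels can be scored from their descriptions alone.

```python
import numpy as np

def joint_input_label_scores(x_repr, label_descs, U, V):
    """Hypothetical sketch of a joint nonlinear input-label embedding.

    x_repr:      (d,)  encoded input text
    label_descs: (L, e) encoded label descriptions
    U:           (d, k) input-side projection into the joint space
    V:           (e, k) label-side projection into the joint space
    """
    g = np.tanh(x_repr @ U)        # input mapped to joint space, (k,)
    E = np.tanh(label_descs @ V)   # labels mapped to joint space, (L, k)
    logits = E @ g                 # input-label compatibility scores, (L,)
    # Softmax over the label set (trained with cross-entropy loss)
    p = np.exp(logits - logits.max())
    return p / p.sum()

rng = np.random.default_rng(1)
x = rng.normal(size=16)            # encoded document
labels = rng.normal(size=(5, 12))  # 5 encoded label descriptions
U = rng.normal(size=(16, 8))
V = rng.normal(size=(12, 8))
probs = joint_input_label_scores(x, labels, U, V)
print(probs.shape)  # (5,)
```

Because U and V are fixed-size regardless of L, adding a new label only requires encoding its description and appending a row to `label_descs`.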

