Fast coarse-to-fine video retrieval using shot-level spatio-temporal statistics

2006 ◽  
Vol 16 (5) ◽  
pp. 642-648 ◽  
Author(s):  
Yu-Hsuan Ho ◽  
Chia-Wen Lin ◽  
Jing-Fung Chen ◽  
Hong-Yuan M. Liao

2009 ◽  
Vol 42 (2) ◽  
pp. 267-282 ◽  
Author(s):  
W. Ren ◽  
S. Singh ◽  
M. Singh ◽  
Y.S. Zhu

2020 ◽  
Vol 34 (07) ◽  
pp. 12886-12893 ◽  
Author(s):  
Xiao-Yu Zhang ◽  
Haichao Shi ◽  
Changsheng Li ◽  
Peng Li

Weakly supervised action recognition and localization for untrimmed videos is a challenging problem with extensive applications. The overwhelming amount of irrelevant background content in untrimmed videos severely hampers effective identification of the actions of interest. In this paper, we propose a novel multi-instance multi-label modeling network based on spatio-temporal pre-trimming to recognize actions and locate the corresponding frames in untrimmed videos. Motivated by the fact that the person is the key factor in a human action, we spatially and temporally segment each untrimmed video into person-centric clips using pose estimation and tracking techniques. Given the bag-of-instances structure associated with video-level labels, action recognition is naturally formulated as a multi-instance multi-label learning problem. The network is optimized iteratively with selective coarse-to-fine pre-trimming based on instance-label activation. After convergence, temporal localization is further achieved with a local-global temporal class activation map. Extensive experiments are conducted on two benchmark datasets, i.e., THUMOS14 and ActivityNet1.3, and the experimental results clearly corroborate the efficacy of our method when compared with state-of-the-art approaches.
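The multi-instance multi-label formulation above can be illustrated with a minimal sketch: each video is a bag of person-centric clips, per-clip class scores are pooled into video-level predictions, and clips whose activation for the predicted class exceeds a threshold serve as a simplified class-activation-style localization. The top-k pooling, the threshold, and the toy scores below are assumptions for illustration, not the paper's exact architecture or parameters.

```python
import numpy as np

def bag_scores(instance_scores, k=2):
    """Aggregate per-instance (clip) class scores to bag (video) level
    via top-k mean pooling, a common multi-instance readout."""
    topk = np.sort(instance_scores, axis=0)[-k:]  # (k, n_classes)
    return topk.mean(axis=0)

def localize(instance_scores, class_idx, threshold=0.5):
    """Return indices of instances whose activation for the given
    class exceeds the threshold -- a simplified temporal
    class-activation-map style localization."""
    return np.where(instance_scores[:, class_idx] >= threshold)[0]

# Toy bag: 5 person-centric clips scored for 3 action classes.
scores = np.array([
    [0.1, 0.9, 0.2],
    [0.2, 0.8, 0.1],
    [0.1, 0.1, 0.1],
    [0.1, 0.7, 0.2],
    [0.0, 0.2, 0.3],
])

video_level = bag_scores(scores, k=2)      # one score per class
predicted = int(np.argmax(video_level))    # video-level action label
frames = localize(scores, predicted, 0.5)  # clips carrying the action
```

In a real pipeline the video-level scores would be trained against the weak video labels (e.g. with a multi-label binary cross-entropy loss), and localization would run only after training converges, mirroring the recognize-then-localize order described in the abstract.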

