Robust Visual Tracking with Discrimination Dictionary Learning

2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Yuanyun Wang ◽  
Chengzhi Deng ◽  
Jun Wang ◽  
Wei Tian ◽  
Shengqian Wang

Dealing with the many kinds of appearance variations that arise in visual tracking is a challenging issue. Existing tracking algorithms build appearance models upon target templates. Such models are not robust to significant appearance variations caused by factors such as illumination changes, partial occlusions, and scale variations. In this paper, we propose a robust tracking algorithm that uses a learnt dictionary to represent target candidates. With the learnt dictionary, a target candidate is represented as a linear combination of dictionary atoms. The discriminative information in the learning samples is exploited, and the dictionary learning process captures appearance variations. Based on the learnt dictionary, we obtain a more stable representation for target candidates. Additionally, the observation likelihood is evaluated based on both the reconstruction error and the l1-constrained dictionary coefficients. Comprehensive experiments demonstrate the superiority of the proposed tracking algorithm over several state-of-the-art tracking algorithms.
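The abstract's core mechanism, representing a candidate as a sparse linear combination of dictionary atoms and scoring it by reconstruction error, can be sketched as below. This is a minimal illustration using ISTA for the l1-constrained coding step; the learnt dictionary, the regularization weight `lam`, and the likelihood bandwidth `sigma` are all placeholder assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a

def observation_likelihood(D, y, lam=0.1, sigma=0.1):
    """Score a candidate by its reconstruction error under the dictionary."""
    a = sparse_code(D, y, lam)
    err = np.linalg.norm(y - D @ a) ** 2
    return np.exp(-err / sigma)

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 8))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
y = D @ np.array([1.0, 0, 0, 0.5, 0, 0, 0, 0])  # candidate in the dictionary's span
print(observation_likelihood(D, y),
      observation_likelihood(D, rng.standard_normal(32)))
```

A candidate well explained by the dictionary yields a small reconstruction error and hence a high likelihood, while an off-manifold candidate is penalized.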

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2137 ◽  
Author(s):  
Chenpu Li ◽  
Qianjian Xing ◽  
Zhenguo Ma

In the field of visual tracking, trackers based on convolutional neural networks (CNNs) have achieved significant results. The fully-convolutional Siamese (SiamFC) tracker is a typical representative of these CNN trackers and has attracted much attention. It models visual tracking as a similarity-learning problem. However, experiments showed that SiamFC is not sufficiently robust in some complex environments, possibly because the tracker lacks enough prior information about the target. Inspired by the key ideas of the Staple tracker and the Kalman filter, we constructed two additional models to compensate for SiamFC’s disadvantages: one contains the target’s prior color information, and the other the target’s prior trajectory information. With these two models, we designed a novel and robust tracking framework on the basis of SiamFC, which we call Histogram–Kalman SiamFC (HKSiamFC). We evaluated the HKSiamFC tracker on the Online Object Tracking Benchmark (OTB) and Temple Color (TC128) datasets, where it showed highly competitive performance compared with the baseline tracker and several other state-of-the-art trackers.
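The trajectory prior described above is a Kalman filter; a minimal constant-velocity sketch is shown below. The state layout and the noise settings `q` and `r` are our illustrative assumptions, not parameters from the paper.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter for a target-centre prior.

    State is [x, y, vx, vy]; only the position is observed.
    """
    def __init__(self, q=1e-2, r=1.0):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0         # x += vx, y += vy per frame
        self.H = np.eye(2, 4)                     # observe position only
        self.Q = q * np.eye(4)                    # process noise
        self.R = r * np.eye(2)                    # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                         # predicted centre

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = ConstantVelocityKF()
for t in range(10):                               # target moving at (2, 1) px/frame
    kf.predict()
    kf.update([2.0 * t, 1.0 * t])
print(kf.predict())                               # prior for the next frame's centre
```

In a tracker, such a prediction could be used to re-weight or re-centre the similarity response map when the appearance cue alone is ambiguous.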


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Heng Fan ◽  
Jinhai Xiang ◽  
Jun Xu ◽  
Honghong Liao

We propose a novel part-based tracking algorithm using online weighted P-N learning. The online weighted P-N learning method takes the weight of samples into account during classification, which improves the performance of the classifier. We apply weighted P-N learning to track a part-based target model instead of the whole target. To do so, the object is segmented into fragments, some of which are selected as local feature blocks (LFBs). Weighted P-N learning is then employed to train a classifier for each LFB, and each LFB is tracked through its corresponding classifier. The object can then be located from the tracking results of the LFBs. During tracking, to handle occlusion and pose changes, we use a substitution strategy that dynamically updates the set of LFBs, which makes our tracker robust. Experimental results demonstrate that the proposed method outperforms state-of-the-art trackers.
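The step of locating the object from the tracked LFBs is not spelled out in the abstract; one plausible sketch is confidence-weighted voting, where each block votes for the object centre via a stored offset. The function name, offsets, and weights below are hypothetical illustrations.

```python
import numpy as np

def locate_from_lfbs(lfb_positions, lfb_offsets, confidences):
    """Hypothetical voting step: each tracked local feature block (LFB) votes
    for the object centre via its stored offset; votes are confidence-weighted."""
    votes = np.asarray(lfb_positions, dtype=float) + np.asarray(lfb_offsets, dtype=float)
    w = np.asarray(confidences, dtype=float)
    return (w[:, None] * votes).sum(axis=0) / w.sum()

centre = locate_from_lfbs(
    lfb_positions=[[10, 10], [30, 12], [12, 28]],   # current LFB locations
    lfb_offsets=[[10, 10], [-10, 8], [8, -8]],      # offsets to the object centre
    confidences=[0.9, 0.8, 0.3],                    # per-classifier confidence
)
print(centre)
```

A scheme like this degrades gracefully under partial occlusion: occluded blocks receive low confidence and contribute little to the vote.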


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Xiaoyan Qian ◽  
Daihao Zhang

A robust tracking method is proposed for complex visual sequences. Unlike the time-consuming offline training used in current deep trackers, we design a simple two-layer online learning network that fuses local convolutional features and global handcrafted features to give a robust representation for visual tracking. The target state estimation is modeled by an adaptive Gaussian mixture, and motion information is used to direct the distribution of the candidate samples effectively. Meanwhile, an adaptive scale selection is adopted to avoid introducing extra background information. A corresponding object template updating procedure is developed to account for possible occlusion and minor changes. Our tracking method has a light structure and performs favorably against several state-of-the-art methods on challenging scenarios from a recent tracking benchmark dataset.
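The abstract does not detail the adaptive Gaussian mixture; a hypothetical two-component sketch is shown below, with a motion-directed component around the predicted position and a broader static component for abrupt motion. The mixture weight `w_motion` and spreads `sig_m`, `sig_s` are invented for illustration.

```python
import numpy as np

def sample_candidates(rng, last_pos, velocity, n=100,
                      w_motion=0.7, sig_m=2.0, sig_s=8.0):
    """Draw candidate centres from a two-component Gaussian mixture:
    a motion-directed component around last_pos + velocity and a broader
    static component around last_pos (covers abrupt, unmodelled motion)."""
    n_motion = rng.binomial(n, w_motion)              # component assignments
    motion = rng.normal(last_pos + velocity, sig_m, size=(n_motion, 2))
    static = rng.normal(last_pos, sig_s, size=(n - n_motion, 2))
    return np.vstack([motion, static])

rng = np.random.default_rng(1)
cands = sample_candidates(rng, last_pos=np.array([50.0, 40.0]),
                          velocity=np.array([5.0, 0.0]))
print(cands.shape, cands.mean(axis=0))
```

Adapting `w_motion` online (e.g. lowering it when the motion estimate proves unreliable) is one way such a mixture could be made "adaptive".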


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Li Jia Wang ◽  
Hua Zhang

An improved online multiple instance learning (IMIL) algorithm for visual tracking is proposed. In the IMIL algorithm, the contribution of each instance to the bag probability is weighted according to its probability. A selection strategy based on an inner product is presented to choose weak classifiers from a classifier pool, which avoids computing the instance probabilities and bag probability M times. Furthermore, a feedback strategy is presented to update the weak classifiers; in this feedback update strategy, different weights are assigned to the tracking result and the template according to the maximum classifier score. Finally, the presented algorithm is compared with other state-of-the-art algorithms. The experimental results demonstrate that the proposed tracking algorithm runs in real time and is robust to occlusion and appearance changes.


2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092965
Author(s):  
Li Zhao ◽  
Pengcheng Huang ◽  
Fei Liu ◽  
Hui Huang ◽  
Huiling Chen

Template dictionary construction is an important issue in sparse representation (SP)-based tracking algorithms. In this article, a drift-free visual tracking algorithm is proposed via the construction of an effective template dictionary. The constructed dictionary is composed of three categories of atoms (templates): nonpolluted atoms, variational atoms, and noise atoms. Moreover, linear combinations of nonpolluted atoms are also added to the dictionary to increase the diversity of atoms. All the atoms are selectively updated to capture appearance changes and alleviate the model drifting problem. A bidirectional tracking process is used, and each direction is optimized by a two-step SP, which greatly reduces the computational burden. Compared with other related works, the constructed dictionary and tracking algorithm are both robust and efficient.


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
An Zhiyong ◽  
Guan Hao ◽  
Li Jinjiang

Object tracking with robust scale estimation is a challenging task in computer vision. This paper presents a novel tracking algorithm that learns the translation and scale filters with a complementary scheme. The translation filter is constructed using ridge regression and multidimensional features. A robust scale filter is constructed by bidirectional scale estimation, comprising a forward scale and a backward scale. First, we learn the scale filter using the forward tracking information; the forward and backward scales can then be estimated using the respective scale filters. Second, a conservative strategy is adopted to reconcile the forward and backward scales. Finally, the scale filter is updated based on the final scale estimate. This update is effective because stable scale estimates improve the performance of the scale filter. To demonstrate the effectiveness of our tracker, experiments are performed on 32 sequences with significant scale variation and on the benchmark dataset of 50 challenging videos. Our results show that the proposed tracker outperforms several state-of-the-art trackers in terms of robustness and accuracy.
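The abstract does not specify the conservative compromise between the forward and backward scales; the sketch below shows one hypothetical fusion rule (geometric mean, damped toward no change), with the `damping` parameter invented for illustration.

```python
def fuse_scales(forward, backward, damping=0.6):
    """Hypothetical conservative compromise between forward and backward
    scale estimates.

    The backward estimate tracks the previous frame from the current one,
    so it is inverted to express both estimates in the forward direction;
    damping pulls the fused scale toward 1.0 (no change) to stay conservative.
    """
    backward_as_forward = 1.0 / backward
    fused = (forward * backward_as_forward) ** 0.5   # geometric mean
    return fused ** damping                          # damp toward scale 1.0

print(fuse_scales(1.10, 1.0 / 1.06))   # consistent estimates: modest scale-up
print(fuse_scales(1.30, 1.00))         # inconsistent estimates: pulled toward 1.0
```

The conservatism matters because an overestimated scale lets background pixels contaminate the filter update, which is hard to recover from.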


2021 ◽  
Vol 11 (18) ◽  
pp. 8698
Author(s):  
Minghe Cao ◽  
Jianzhong Wang ◽  
Li Ming

Since robotics techniques have not yet reached full automation, robot following is common and crucial in robotic applications to reduce the need for dedicated teleoperation. To achieve this task, the target must first be perceived robustly and consistently. In this paper, a robust visual tracking approach is proposed. The approach adopts a scene analysis module (SAM) to identify the real target and similar distractors by leveraging statistical characteristics of cross-correlation responses. Positive templates are collected based on the tracking confidence constructed by the SAM, and negative templates are gathered from the recognized distractors. Response fusion is then performed on the collected templates: the responses of the target are enhanced and false responses are suppressed, leading to robust tracking results. The proposed approach is validated on an outdoor robot-person following dataset and a collection of public person tracking datasets. The results show that our approach achieves state-of-the-art tracking performance in terms of both robustness and AUC score.
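The enhance/suppress fusion described above can be sketched as below: responses correlated with positive templates are boosted and responses correlated with negative (distractor) templates are subtracted. The additive rule and the weights `alpha`, `beta` are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_responses(resp_target, resp_pos, resp_neg, alpha=0.5, beta=0.5):
    """Hypothetical fusion of cross-correlation response maps: responses
    matching positive templates are enhanced, responses matching recognised
    distractors (negative templates) are suppressed."""
    fused = resp_target + alpha * np.mean(resp_pos, axis=0) \
                        - beta * np.mean(resp_neg, axis=0)
    return np.clip(fused, 0.0, None)

rng = np.random.default_rng(2)
base = rng.random((17, 17)) * 0.2
base[8, 8] = 0.90        # true target peak
base[3, 12] = 0.85       # distractor peak of similar height
pos = np.zeros((1, 17, 17)); pos[0, 8, 8] = 1.0    # positive templates fire on target
neg = np.zeros((1, 17, 17)); neg[0, 3, 12] = 1.0   # negative templates fire on distractor
fused = fuse_responses(base, pos, neg)
print(np.unravel_index(fused.argmax(), fused.shape))  # distractor no longer competes
```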


2020 ◽  
Author(s):  
ZengShun Zhao ◽  
Juanjuan Wang ◽  
HaoRan Yang ◽  
Ning Xu ◽  
Chengqin Wu ◽  
...  

Long-term visual tracking poses more challenges than short-term tracking and is closer to realistic applications, yet most existing methods do not address it well and their performance remains limited. In this work, we present a reliable yet simple long-term tracking method that extends a state-of-the-art discriminative correlation filter (DCF) tracking algorithm with a re-detection component based on an SVM model. The DCF tracker localizes the target in each frame, and the re-detector efficiently re-detects the target in the whole image when tracking fails. We further introduce a robust confidence evaluation criterion that combines the maximum response criterion and the average peak-to-correlation energy (APCE) to judge the confidence level of the predicted target. When the confidence is generally high, the SVM is updated accordingly; if the confidence drops sharply, the SVM re-detects the target. We perform extensive experiments on the OTB-2015 dataset, and the results demonstrate the effectiveness of our algorithm in long-term tracking.
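The APCE measure mentioned above has a standard form, APCE = (F_max − F_min)² / mean((F − F_min)²), which is high for a single sharp response peak and low for noisy, multi-modal maps. A minimal sketch of the joint confidence criterion follows; the thresholds are placeholders, since the abstract does not give values.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a response map:
    APCE = (F_max - F_min)^2 / mean((F - F_min)^2)."""
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

def is_confident(response, apce_thr=20.0, peak_thr=0.3):
    """Joint criterion (thresholds illustrative): both the maximum response
    and the APCE must be high for the prediction to be trusted."""
    return response.max() >= peak_thr and apce(response) >= apce_thr

rng = np.random.default_rng(3)
sharp = rng.random((15, 15)) * 0.1; sharp[7, 7] = 1.0  # single sharp peak: reliable
flat = rng.random((15, 15))                             # multi-modal noise: unreliable
print(apce(sharp), apce(flat))
```

In the method above, a high joint confidence would trigger an SVM update, while a sharp drop would trigger whole-image re-detection.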


2010 ◽  
Vol 30 (3) ◽  
pp. 643-645 ◽  
Author(s):  
Wei ZENG ◽  
Gui-bin ZHU ◽  
Jie CHEN ◽  
Ding-ding TANG
