Robust Visual Tracking Using an Effective Appearance Model Based on Sparse Coding

2012 ◽ Vol 3 (3) ◽ pp. 1-18 ◽ Author(s): Shengping Zhang, Hongxun Yao, Xin Sun, Shaohui Liu
2021 ◽ Vol 2021 ◽ pp. 1-9 ◽ Author(s): Yun Liang, Dong Wang, Yijin Chen, Lei Xiao, Caixing Liu

This paper proposes a new visual tracking method that constructs a robust appearance model of the target with convolutional sparse coding. First, the method uses convolutional sparse coding to decompose the region of interest around the target into a smooth image and four detail images with different fitting degrees. Second, it computes the initial target region by tracking the smooth image with kernel correlation filtering, and defines an appearance model describing the details of the target from the initial target region and the combination of the four detail images. Third, it proposes a matching method based on overlap rate and Euclidean distance to evaluate candidates against the appearance model and compute the tracking result from the detail images. Finally, the two tracking results, computed separately from the smooth image and the detail images, are combined to produce the final target rectangle. Extensive experiments on videos from Tracking Benchmark 2015 demonstrate that the method produces markedly better results than most existing visual tracking methods.
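The candidate-evaluation step above combines an overlap rate with a Euclidean distance. A minimal sketch of such a score is shown below; the function names, the linear weighting `alpha`, and the distance normalisation `scale` are illustrative assumptions, not the paper's exact formulation.

```python
def overlap_rate(a, b):
    """Intersection-over-union of two rectangles given as (x, y, w, h)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def centre_distance(a, b):
    """Euclidean distance between rectangle centres."""
    ax, ay = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx, by = b[0] + b[2] / 2, b[1] + b[3] / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def match_score(candidate, model, alpha=0.5, scale=50.0):
    """Score a candidate rectangle against the appearance-model rectangle.

    Higher is better: reward overlap, penalise centre drift.
    alpha and scale are assumed tuning parameters.
    """
    drift = min(centre_distance(candidate, model) / scale, 1.0)
    return alpha * overlap_rate(candidate, model) + (1 - alpha) * (1 - drift)
```

A candidate identical to the model rectangle scores 1.0; scores decrease as overlap shrinks or the centre drifts away.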


2014 ◽ Vol 144 ◽ pp. 581-595 ◽ Author(s): Jia Yan, Xi Chen, Dexiang Deng, Qiuping Zhu

2021 ◽ Vol 2021 (29) ◽ pp. 381-386 ◽ Author(s): Xu Qiang, Muhammad Safdar, Ming Ronnier Luo

Two colour-appearance-model-based UCSs, CAM16-UCS and ZCAM-QMh, were tested using the HDR, WCG, and COMBVD datasets. For comparison, two widely used UCSs, CIELAB and ICTCP, were also tested. Model performance was assessed with the STRESS metric and the correlation coefficient between predicted colour differences and visual differences, together with local and global uniformity based on chromatic discrimination ellipses. The two UCSs gave similar performance. A luminance parametric factor kL and a power factor γ were introduced to optimize the colour-difference models. Values of kL = 0.75 and γ = 0.5 gave a marked improvement in predicting the HDR dataset, and kL = 0.3 gave a significant improvement on the WCG dataset. On the COMBVD dataset, optimization provided very limited improvement.
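The STRESS index used above measures agreement between predicted colour differences (ΔE) and visual differences (ΔV); the kL and γ factors rescale the lightness term and apply a power compression. A minimal sketch is below. The STRESS formula follows the standard definition; the `delta_e` function is a toy Euclidean colour difference for illustration only, not CAM16-UCS or ZCAM-QMh.

```python
import math

def stress(dE, dV):
    """STRESS index: 0 means perfect proportional agreement; larger is worse."""
    f1 = sum(e * e for e in dE) / sum(e * v for e, v in zip(dE, dV))
    num = sum((e - f1 * v) ** 2 for e, v in zip(dE, dV))
    den = sum((f1 * v) ** 2 for v in dV)
    return 100.0 * math.sqrt(num / den)

def delta_e(pair, kL=1.0, gamma=1.0):
    """Toy colour difference in a (L, a, b)-like space.

    kL divides the lightness difference; gamma applies a power to the
    overall difference, mirroring the optimisation described above.
    """
    (L1, a1, b1), (L2, a2, b2) = pair
    dE = math.sqrt(((L1 - L2) / kL) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
    return dE ** gamma
```

In an optimisation like the one reported, kL and γ would be varied to minimise STRESS between `delta_e` predictions and the visual data of each dataset.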

