TY - GEN
T1 - Joint Sparsity-Based Robust Visual Tracking
AU - Bozorgtabar, Seyed
AU - Goecke, Roland
PY - 2014
Y1 - 2014
N2 - In this paper, a new object tracking method in a particle filter framework utilising a joint sparsity-based model is proposed. Based on the observation that a target can be reconstructed from several templates that are updated dynamically, we jointly analyse the representations of the particles under a single regression framework with a shared underlying structure. Two convex regularisations are combined in our model to enforce sparsity and to facilitate the coupling of information between particles. Unlike previous methods, which either assume a model commonality between particles or regard them as independent tasks, we simultaneously take into account a structure-inducing norm and an outlier-detecting norm. Such a formulation is shown to be more flexible in handling various types of challenges, including occlusion and cluttered backgrounds. To derive the optimal solution efficiently, we propose to use a Preconditioned Conjugate Gradient method, which is computationally affordable for high-dimensional data. Furthermore, an online updating scheme is included in the dictionary learning, which makes the proposed tracker less vulnerable to outliers. Experiments on challenging video sequences demonstrate that the proposed approach robustly handles occlusion, pose and illumination variation, and outperforms state-of-the-art trackers in tracking accuracy.
KW - Visual Tracking
KW - Sparsity
KW - Preconditioned Conjugate Gradient
UR - http://www.scopus.com/inward/record.url?scp=84983134682&partnerID=8YFLogxK
UR - http://www.mendeley.com/research/joint-sparsitybased-robust-visual-tracking
U2 - 10.1109/ICIP.2014.7025998
DO - 10.1109/ICIP.2014.7025998
M3 - Conference contribution
SN - 9781479957514
T3 - 2014 IEEE International Conference on Image Processing, ICIP 2014
SP - 4927
EP - 4931
BT - 2014 IEEE International Conference on Image Processing (ICIP)
A2 - Pesquet-Popescu
A2 - Fowler
PB - IEEE, Institute of Electrical and Electronics Engineers
CY - Paris, France
T2 - IEEE International Conference on Image Processing 2014
Y2 - 27 October 2014 through 30 October 2014
ER -