Joint Sparsity-Based Robust Visual Tracking

Seyed Bozorgtabar, Roland Goecke

    Research output: A Conference proceeding or a Chapter in Book › Conference contribution


    Abstract

    In this paper, a new object tracking method in a particle filter framework utilising a joint sparsity-based model is proposed. Based on the observation that a target can be reconstructed from several templates that are updated dynamically, we jointly analyse the representation of the particles under a single regression framework with a shared underlying structure. Two convex regularisations are combined in our model to enable sparsity as well as to facilitate the coupling of information between particles. Unlike previous methods, which either assume a model commonality between particles or regard them as independent tasks, we simultaneously take into account a structure-inducing norm and an outlier-detecting norm. Such a formulation is shown to be more flexible in handling various types of challenges, including occlusion and cluttered backgrounds. To derive the optimal solution efficiently, we propose to use a Preconditioned Conjugate Gradient method, which is computationally affordable for high-dimensional data. Furthermore, an online updating scheme is included in the dictionary learning, which makes the proposed tracker less vulnerable to outliers. Experiments on challenging video sequences demonstrate the robustness of the proposed approach in handling occlusion, pose and illumination variation, and show that it outperforms state-of-the-art trackers in tracking accuracy.
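    The solver step mentioned in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the paper's objective couples a structure-inducing norm with an outlier-detecting norm, whereas this toy example solves only a smooth, ridge-regularised joint representation of particle observations against a template dictionary, for which the normal equations are symmetric positive definite and a Preconditioned Conjugate Gradient (here with a simple Jacobi preconditioner) applies directly. All names (`pcg`, `D`, `Y`, `lam`) are illustrative.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned Conjugate Gradient for A x = b with A symmetric
    positive definite; M_inv applies the inverse preconditioner to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    z = M_inv(r)                       # preconditioned residual
    p = z.copy()                       # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # conjugate direction update
        rz = rz_new
    return x

# Toy joint representation: n particle observations Y coded against a
# template dictionary D, ridge-regularised so the system is SPD.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 20))     # template dictionary (d x k)
Y = rng.standard_normal((100, 5))      # particle observations (d x n)
lam = 0.1                              # ridge weight (stand-in for the paper's norms)
H = D.T @ D + lam * np.eye(20)         # SPD normal-equation matrix
M_inv = lambda r: r / np.diag(H)       # Jacobi (diagonal) preconditioner
C = np.column_stack([pcg(H, D.T @ Y[:, i], M_inv) for i in range(Y.shape[1])])
```

    Because the system matrix `H` depends only on the dictionary, it can be formed once per frame and reused across all particles, which is what makes a matrix-free iterative solver attractive for high-dimensional data.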
    Original language: English
    Title of host publication: 2014 IEEE International Conference on Image Processing (ICIP)
    Editors: Pesquet-Popescu, Fowler
    Place of publication: Paris, France
    Publisher: IEEE, Institute of Electrical and Electronics Engineers
    Pages: 4927-4931
    Number of pages: 5
    ISBN (Electronic): 9781479957514
    DOI: 10.1109/ICIP.2014.7025998
    Publication status: Published - 2014
    Event: 2014 IEEE International Conference on Image Processing - Paris, France
    Duration: 27 Oct 2014 - 30 Oct 2014

    Conference

    Conference: 2014 IEEE International Conference on Image Processing
    Country: France
    City: Paris
    Period: 27/10/14 - 30/10/14

    Cite this

    Bozorgtabar, S., & Goecke, R. (2014). Joint Sparsity-Based Robust Visual Tracking. In Pesquet-Popescu & Fowler (Eds.), 2014 IEEE International Conference on Image Processing (ICIP) (pp. 4927-4931). Paris, France: IEEE, Institute of Electrical and Electronics Engineers. https://doi.org/10.1109/ICIP.2014.7025998
    @inproceedings{db760d087a6745ad8888a7795e57210f,
    title = "Joint Sparsity-Based Robust Visual Tracking",
    author = "Seyed Bozorgtabar and Roland Goecke",
    keywords = "Visual Tracking, Sparsity, Preconditioned Conjugate Gradient",
    year = "2014",
    doi = "10.1109/ICIP.2014.7025998",
    language = "English",
    pages = "4927--4931",
    editor = "Pesquet-Popescu and Fowler",
    booktitle = "2014 IEEE International Conference on Image Processing (ICIP)",
    publisher = "IEEE, Institute of Electrical and Electronics Engineers",
    address = "United States",
    }
