Robust visual tracking via rank-constrained sparse learning

Seyed Bozorgtabar, Roland GOECKE

    Research output: A Conference proceeding or a Chapter in Book › Conference contribution

    Abstract

    In this paper, we present an improved low-rank sparse learning method for particle filter-based visual tracking, which we denote rank-constrained sparse learning. Since each particle can be sparsely represented by a linear combination of the bases of an adaptive dictionary, we exploit the underlying structure among particles by constraining the rank of the particle sparse representations jointly over the adaptive dictionary. In addition to utilising a common structure among particles, the proposed tracker also selects the most discriminative features for particle representations via an additional feature selection module in the proposed objective function. Furthermore, we present an efficient way to solve this learning problem by connecting the low-rank structure extracted from the particles to a simpler learning problem in the devised discriminative subspace, which reduces the overall computational cost for high-dimensional particle candidates. Finally, to achieve a more robust tracker, we augment the sparse representation of particles with adaptive weights that indicate the similarity between candidates and the dictionary templates. The proposed approach is extensively evaluated on the VOT 2013 visual tracking evaluation platform, which includes 16 challenging sequences. Experimental results, compared against state-of-the-art methods, show the robustness and effectiveness of the proposed tracker.
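
    As a rough, illustrative sketch of the kind of objective the abstract describes (an assumption for exposition, not the authors' exact formulation): let X (d×n) collect the n particle observations, D (d×m) be the adaptive template dictionary, and C (m×n) be the joint matrix of sparse coefficients. A rank-constrained sparse coding step of this type is commonly posed as

        \min_{C} \; \tfrac{1}{2}\,\lVert X - D C \rVert_F^2 \;+\; \lambda \lVert C \rVert_1 \;+\; \gamma \lVert C \rVert_*

    where the l1 penalty keeps each particle's representation sparse and the nuclear norm ||C||_* serves as the standard convex surrogate for a low-rank constraint that ties the particles' representations together. The feature selection module and the adaptive similarity weights mentioned in the abstract would enter as additional, method-specific terms, and the tracked state is then typically taken as the particle with the smallest reconstruction error over the target templates.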
    Original language: English
    Title of host publication: 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2014)
    Editors: Bouzerdoum, Wang, Ogunbona, Li, Phung
    Place of Publication: Wollongong
    Publisher: IEEE
    Pages: 1-7
    Number of pages: 7
    ISBN (Electronic): 9781479954094
    ISBN (Print): 9781479954100
    DOIs: https://doi.org/10.1109/DICTA.2014.7008129
    Publication status: Published - 25 Nov 2014
    Event: 2014 International Conference on Digital Image Computing, Techniques and Applications - Wollongong, Australia
    Duration: 25 Nov 2014 - 27 Nov 2014

    Conference

    Conference: 2014 International Conference on Digital Image Computing, Techniques and Applications
    Abbreviated title: DICTA 2014
    Country: Australia
    City: Wollongong
    Period: 25/11/14 - 27/11/14

    Cite this

    Bozorgtabar, S., & GOECKE, R. (2014). Robust visual tracking via rank-constrained sparse learning. In Bouzerdoum, Wang, Ogunbona, Li, & Phung (Eds.), 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2014) (pp. 1-7). Wollongong: IEEE. https://doi.org/10.1109/DICTA.2014.7008129
    @inproceedings{0be2f5dd9aca40b898ecd86a98b31354,
    title = "Robust visual tracking via rank-constrained sparse learning",
    abstract = "In this paper, we present an improved low-rank sparse learning method for particle filter based visual tracking, which we denote as rank-constrained sparse learning. Since each particle can be sparsely represented by a linear combination of the bases from an adaptive dictionary, we exploit the underlying structure between particles by constraining the rank of particle sparse representations jointly over the adaptive dictionary. Besides utilising a common structure among particles, the proposed tracker also suggests the most discriminative features for particle representations using an additional feature selection module employed in the proposed objective function. Furthermore, we present an efficient way to solve this learning problem by connecting the low-rank structure extracted from particles to a simpler learning problem in the devised discriminative subspace. The suggested way improves the overall computational complexity for the high-dimensional particle candidates. Finally, in order to achieve a more robust tracker, we augment the sparse representation of particles with adaptive weights, which indicate similarity between candidates and the dictionary templates. The proposed approach is extensively evaluated on the VOT 2013 visual tracking evaluation platform including 16 challenging sequences. Experimental results compared to state-of-the-art methods show the robustness and effectiveness of the proposed tracker.",
    keywords = "Visual Tracking, sparse learning, Particle filter",
    author = "Seyed Bozorgtabar and Roland GOECKE",
    year = "2014",
    month = "11",
    day = "25",
    doi = "10.1109/DICTA.2014.7008129",
    language = "English",
    isbn = "9781479954100",
    pages = "1--7",
    editor = "Bouzerdoum and Wang and Ogunbona and Li and Phung",
    booktitle = "2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2014)",
    publisher = "IEEE",

    }
