Evaluating multi-task learning for multi-view head-pose classification in interactive environments

Yan Yan, Ramanathan Subramanian, Elisa Ricci, Oswald Lanz, Nicu Sebe

Research output: Conference contribution (chapter in book/conference proceeding), peer-reviewed

11 Citations (Scopus)

Abstract

Social attention behavior offers vital cues for inferring personality traits in interactive settings such as round-table meetings and cocktail parties. Head orientation is typically employed as a proxy for the social attention direction when faces are captured at low resolution. Recently, multi-task learning (MTL) has been proposed to robustly compute head pose under perspective and scale-based facial appearance variations when multiple distant, large field-of-view cameras are employed for visual analysis in smart-room applications. In this paper, we evaluate the effectiveness of an SVM-based MTL (SVM+MTL) framework with various facial descriptors (KL, HOG, LBP, etc.). The KL+HOG feature combination is found to produce the best classification performance, with SVM+MTL outperforming the classical SVM irrespective of the feature used.
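As an illustration of the kind of pipeline the abstract describes, the sketch below shows HOG-based head-pose classification with a classical linear SVM baseline and a simple multi-task variant that treats each camera view as a task. The multi-task part uses a generic Evgeniou-Pontil style feature-augmentation trick (learning w_t = w_0 + v_t with a single linear SVM); this is an assumption for illustration only, not the paper's SVM+MTL formulation, and all data, shapes, and parameter values are synthetic placeholders.

```python
# Hypothetical sketch: head-pose classification from low-resolution face crops.
# HOG descriptors + a linear SVM baseline, plus a simple MTL variant via
# feature augmentation (one task per camera). Not the authors' implementation.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC


def hog_descriptor(face_patch):
    """HOG descriptor for a small grayscale face crop (e.g. 20x20 pixels)."""
    return hog(face_patch, orientations=9, pixels_per_cell=(4, 4),
               cells_per_block=(2, 2), feature_vector=True)


def augment_for_mtl(X, task_ids, n_tasks, shared_scale=1.0):
    """Stack a shared feature block plus one per-task block so that a single
    linear SVM effectively learns w_t = w_0 + v_t (feature-augmentation MTL)."""
    d = X.shape[1]
    Z = np.zeros((X.shape[0], d * (n_tasks + 1)))
    Z[:, :d] = shared_scale * X                       # shared block (w_0)
    for i, t in enumerate(task_ids):                  # task-specific block (v_t)
        Z[i, d * (t + 1): d * (t + 2)] = X[i]
    return Z


# Synthetic placeholders: face crops, head-pose class labels, camera index per sample.
rng = np.random.default_rng(0)
patches = rng.random((200, 20, 20))                   # fake low-res face crops
X = np.array([hog_descriptor(p) for p in patches])
y = rng.integers(0, 8, size=200)                      # e.g. 8 head-pose classes
task_ids = rng.integers(0, 4, size=200)               # e.g. 4 cameras = 4 tasks

svm_single = LinearSVC(C=1.0).fit(X, y)                              # classical SVM
svm_mtl = LinearSVC(C=1.0).fit(augment_for_mtl(X, task_ids, 4), y)   # MTL variant
```

At prediction time, a sample from camera t would be augmented the same way before calling `svm_mtl.predict`; the `shared_scale` parameter controls how strongly the shared component w_0 is favored over the per-camera components.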

Original language: English
Title of host publication: Proceedings - International Conference on Pattern Recognition
Editors: Anders Heyden, Denis Laurendeau, Michael Felsberg
Place of publication: United States
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Pages: 4182-4187
Number of pages: 6
ISBN (electronic): 9781479952083
Publication status: Published - 4 Dec 2014
Externally published: Yes
Event: 22nd International Conference on Pattern Recognition (ICPR 2014) - Stockholm, Sweden
Duration: 24 Aug 2014 - 28 Aug 2014

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (print): 1051-4651

Conference

Conference: 22nd International Conference on Pattern Recognition, ICPR 2014
Country/Territory: Sweden
City: Stockholm
Period: 24/08/14 - 28/08/14
