A Multi-Task Learning Framework for Head Pose Estimation under Target Motion

Yan Yan, Elisa Ricci, Ramanathan Subramanian, Gaowen Liu, Oswald Lanz, Nicu Sebe

Research output: Contribution to journal › Article › peer-review

111 Citations (Scopus)

Abstract

Recently, head pose estimation (HPE) from low-resolution surveillance data has gained in importance. However, monocular and multi-view HPE approaches still perform poorly under target motion, as facial appearance is distorted by camera perspective and scale changes when a person moves around. To address this, we propose FEGA-MTL, a novel framework based on Multi-Task Learning (MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. Upon partitioning the monitored scene into a dense uniform spatial grid, FEGA-MTL simultaneously clusters grid partitions into regions with similar facial appearance and learns region-specific head pose classifiers. In the learning phase, guided by two graphs which a priori model the similarity among (1) grid partitions based on camera geometry and (2) head pose classes, FEGA-MTL derives the optimal scene partitioning and the associated pose classifiers. At test time, a person tracker determines the target's position, and the corresponding region-specific classifier is invoked for HPE. The FEGA-MTL framework naturally extends to a weakly supervised setting where the target's walking direction is employed as a proxy in lieu of head orientation. Experiments confirm that FEGA-MTL significantly outperforms competing single-task and multi-task learning methods in multi-view settings.
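To make the abstract's core idea concrete, the sketch below illustrates graph-regularized multi-task learning in the spirit described: one linear pose classifier per spatial grid partition, with a camera-geometry graph coupling classifiers of nearby partitions and a pose-similarity graph coupling the weight vectors of similar head-pose classes. This is a minimal illustration only; the least-squares loss, the plain gradient-descent solver, and all function and variable names are assumptions, not the paper's actual FEGA-MTL formulation or released code.

```python
# Minimal sketch (illustrative, not the authors' method) of two-graph
# regularized multi-task learning for region-specific pose classifiers.
import numpy as np

def fega_mtl_sketch(X, Y, region_of, A_geo, A_pose,
                    lam_geo=0.1, lam_pose=0.1, lr=0.01, iters=500):
    """X: (n, d) head-crop features; Y: (n, K) one-hot pose labels;
    region_of: (n,) grid-partition index per sample;
    A_geo: (R, R) partition-similarity graph from camera geometry;
    A_pose: (K, K) head-pose-class similarity graph."""
    n, d = X.shape
    K = Y.shape[1]
    R = A_geo.shape[0]
    L_geo = np.diag(A_geo.sum(1)) - A_geo     # graph Laplacians
    L_pose = np.diag(A_pose.sum(1)) - A_pose
    W = np.zeros((R, d, K))                   # one classifier per partition
    for _ in range(iters):
        G = np.zeros_like(W)
        for r in range(R):                    # least-squares loss per region
            idx = region_of == r
            if idx.any():
                Xr, Yr = X[idx], Y[idx]
                G[r] = Xr.T @ (Xr @ W[r] - Yr) / idx.sum()
        # geometry graph: pull neighbouring partitions' classifiers together
        G += 2 * lam_geo * np.einsum('rs,sdk->rdk', L_geo, W)
        # pose graph: pull similar pose classes' weight vectors together
        G += 2 * lam_pose * np.einsum('kl,rdl->rdk', L_pose, W)
        W -= lr * G
    return W

def predict(W, x, r):
    """Use the classifier of the partition r returned by a person tracker."""
    return np.argmax(x @ W[r])
```

The two Laplacian terms mirror the two a priori graphs in the abstract: partitions that look alike under camera geometry share statistical strength, and adjacent pose classes are encouraged to have similar decision boundaries.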

Original language: English
Article number: 7254213
Pages (from-to): 1070-1083
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 38
Issue number: 6
DOIs
Publication status: Published - 1 Jun 2016
Externally published: Yes
