TY - GEN
T1 - An adaptation framework for head-pose classification in dynamic multi-view scenarios
AU - Rajagopal, Anoop K.
AU - Subramanian, Ramanathan
AU - Vieriu, Radu L.
AU - Ricci, Elisa
AU - Lanz, Oswald
AU - Ramakrishnan, Kalpathi
AU - Sebe, Nicu
PY - 2013
Y1 - 2013
N2 - Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance and head-pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions vary. The adaptation framework accounts for the reliability of different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state of the art by 9.5% under relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic, multi-view head-pose dataset with ground truth, publicly available with this paper.
UR - http://www.scopus.com/inward/record.url?scp=84875909775&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-37444-9_51
DO - 10.1007/978-3-642-37444-9_51
M3 - Conference contribution
AN - SCOPUS:84875909775
SN - 9783642374432
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 652
EP - 666
BT - Computer Vision, ACCV 2012 - 11th Asian Conference on Computer Vision, Revised Selected Papers
A2 - Lee, Kyoung Mu
A2 - Matsushita, Yasuyuki
A2 - Rehg, James M.
A2 - Hu, Zhanyi
PB - Springer
CY - Netherlands
T2 - 11th Asian Conference on Computer Vision, ACCV 2012
Y2 - 5 November 2012 through 9 November 2012
ER -