Regression based pose estimation with automatic occlusion detection and rectification

Ibrahim Radwan, Abhinav Dhall, Jyoti Joshi, Roland Goecke

Research output: Conference contribution (peer-reviewed)


Abstract

Human pose estimation is a classic problem in computer vision. Statistical models based on part-based modelling and the pictorial structure framework have recently been widely used for articulated human pose estimation. However, the performance of these models has been limited by the presence of self-occlusion. This paper presents a learning-based framework to automatically detect and recover self-occluded body parts. We learn two different models: one for detecting occluded parts in the upper body and another for the lower body. To solve the key problem of knowing which parts are occluded, we construct Gaussian Process Regression (GPR) models to learn the parameters of the occluded body parts from their corresponding ground truth parameters. Using these models, the pictorial structure of the occluded parts in unseen images is automatically rectified. The proposed framework outperforms a state-of-the-art pictorial structure approach for human pose estimation on three different datasets.
Original language: English
Title of host publication: Proceedings - IEEE International Conference on Multimedia and Expo (ICME 2012)
Place of publication: United States
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 121-127
Number of pages: 7
ISBN (Print): 9781467316590
DOIs
Publication status: Published - 2012
Event: 2012 IEEE International Conference on Multimedia and Expo (ICME) - Melbourne, Australia
Duration: 9 Jul 2012 - 13 Jul 2012

Conference

Conference: 2012 IEEE International Conference on Multimedia and Expo (ICME)
Country/Territory: Australia
City: Melbourne
Period: 9/07/12 - 13/07/12
