Emotion Recognition Using PHOG and LPQ features

Abhinav Dhall, Akshay Asthana, Roland Goecke, Tamas Gedeon

Research output: A Conference proceeding or a Chapter in Book › Conference contribution › peer-review

200 Citations (Scopus)
4 Downloads (Pure)

Abstract

We propose a method for automatic emotion recognition as part of the FERA 2011 competition. The system extracts pyramid of histogram of gradients (PHOG) and local phase quantisation (LPQ) features to encode the shape and appearance information. For selecting key frames, K-means clustering is applied to the normalised shape vectors derived from constrained local model (CLM) based face tracking on the image sequences. The shape vectors closest to the cluster centres are then used to extract the shape and appearance features. We demonstrate our results on the SSPNet GEMEP-FERA dataset, which comprises both person-specific and person-independent partitions. For emotion classification, we use a support vector machine (SVM) and large margin nearest neighbour (LMNN) classifier, and compare our results to the pre-computed FERA 2011 emotion challenge baseline.
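As a rough illustration of the key-frame selection step described in the abstract, the sketch below runs K-means over per-frame shape vectors and keeps the frames nearest to each cluster centre. It is written in Python with NumPy and scikit-learn purely as an assumed example; the paper does not specify an implementation, the synthetic shape vectors merely stand in for real CLM tracker output, and the PHOG/LPQ feature extraction and SVM/LMNN classifiers are not reproduced here.

import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(shape_vectors, n_clusters=5, random_state=0):
    # shape_vectors: (n_frames, n_points * 2) array of normalised landmark
    # coordinates, one row per video frame (hypothetical tracker output).
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    km.fit(shape_vectors)
    key_frames = []
    for centre in km.cluster_centers_:
        # Keep the frame whose shape vector lies closest to this cluster centre.
        dists = np.linalg.norm(shape_vectors - centre, axis=1)
        key_frames.append(int(np.argmin(dists)))
    return sorted(set(key_frames))

if __name__ == "__main__":
    # Toy usage with synthetic data: 120 frames, 68 landmarks (x, y) per frame.
    rng = np.random.default_rng(0)
    fake_shapes = rng.normal(size=(120, 68 * 2))
    print(select_key_frames(fake_shapes, n_clusters=5))

The selected key frames would then be passed to the PHOG and LPQ feature extractors before classification.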
Original language: English
Title of host publication: 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
Editors: Kevin Bowyer, Marian Bartlett, Rainer Stiefelhagen
Place of publication: Santa Barbara
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Pages: 878-883
Number of pages: 6
ISBN (Electronic): 9781424491407
ISBN (Print): 9781424491414
DOIs
Publication status: Published - 2011
Event: 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) - Santa Barbara, United States
Duration: 21 Mar 2011 – 25 Mar 2011

Conference

Conference: 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
Country/Territory: United States
City: Santa Barbara
Period: 21/03/11 – 25/03/11
