A Multilevel Fusion Approach for Audiovisual Emotion Recognition

Girija CHETTY, Michael WAGNER, Roland GOECKE

Research output: A Conference proceeding or a Chapter in Book › Chapter

7 Citations (Scopus)
1 Download (Pure)


This chapter addresses the quantification of facial expressions to detect low, medium, and high levels of expression intensity. It develops an automatic emotion classification technique for recognizing six facial emotions: anger, disgust, fear, happiness, sadness, and surprise. The authors evaluated two types of facial features for this purpose: facial deformation features and marker-based features. The results show that sectored volumetric difference function (SVDF/VDF) shape transformation features allow better quantification of facial expressions than marker-based features. Future work will investigate better methods for fusing audiovisual information that can model the dynamics of facial expressions and speech; segmental-level acoustic information can be used to trace emotions at the frame level.
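As a minimal sketch of how audiovisual fusion of the kind described above can work, the following shows decision-level (score) fusion, one common approach: per-emotion posteriors from a facial-feature classifier and an acoustic classifier are combined as a weighted sum. Only the six emotion labels come from the abstract; the scores, the weight `alpha`, and the function names are illustrative assumptions, not the chapter's actual method or results.

```python
# Illustrative decision-level fusion for audiovisual emotion recognition.
# The emotion labels are from the chapter; all scores and the fusion
# weight are made up for demonstration.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse_scores(video_scores, audio_scores, alpha=0.6):
    """Weighted sum of per-emotion posteriors from the two modalities.

    alpha weights the video (facial-feature) classifier;
    (1 - alpha) weights the audio (acoustic) classifier.
    """
    return {e: alpha * video_scores[e] + (1 - alpha) * audio_scores[e]
            for e in EMOTIONS}

def classify(video_scores, audio_scores, alpha=0.6):
    """Return the emotion with the highest fused score."""
    fused = fuse_scores(video_scores, audio_scores, alpha)
    return max(fused, key=fused.get)

# Example: the video classifier favours happiness, the audio classifier
# favours surprise; the fused decision depends on the weight alpha.
video = {"anger": 0.05, "disgust": 0.05, "fear": 0.05,
         "happiness": 0.60, "sadness": 0.05, "surprise": 0.20}
audio = {"anger": 0.10, "disgust": 0.05, "fear": 0.05,
         "happiness": 0.30, "sadness": 0.10, "surprise": 0.40}

print(classify(video, audio))             # video-weighted fusion
print(classify(video, audio, alpha=0.0))  # audio-only decision
```

In practice the weights would be tuned on held-out data, and feature-level fusion (concatenating visual and acoustic features before classification) is the main alternative to this score-level scheme.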

Original language: English
Title of host publication: Emotion Recognition: A Pattern Analysis Approach
Subtitle of host publication: A Pattern Analysis Approach
Editors: Amit Konar, Aruna Chakraborty
Place of Publication: USA
Publisher: John Wiley & Sons
Number of pages: 24
ISBN (Electronic): 9781118910566
ISBN (Print): 9781118130667
Publication status: Published - 2 Jan 2015

Publication series: Emotion Recognition: A Pattern Analysis Approach


