Decoding affect in videos employing the MEG brain signal

Mojtaba Khomami Abadi, Mostafa Kia, Ramanathan Subramanian, Paolo Avesani, Nicu Sebe

Research output: A Conference proceeding or a Chapter in Book › Conference contribution › peer-review

10 Citations (Scopus)

Abstract

This paper presents a characterization of affect (valence and arousal) using the magnetoencephalogram (MEG) brain signal. We attempt single-trial classification of affective responses to movie and music-video clips using MEG responses recorded from seven participants. The main findings of this study are that: (i) the MEG signal effectively encodes affective viewer responses; (ii) clip arousal is better predicted than valence from MEG; and (iii) prediction performance is better for movie clips than for music videos.
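The abstract describes a single-trial classification setup without implementation detail. As an illustration only, here is a minimal Python sketch of one plausible pipeline: per-sensor band-power features fed to a linear SVM under leave-one-trial-out cross-validation. The frequency bands, classifier, trial/sensor counts, and the random placeholder data are all assumptions for this sketch, not the authors' method.

```python
# Hypothetical single-trial affect classification from MEG features.
# NOT the authors' pipeline: features, classifier, and data shapes are assumed.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Placeholder MEG epochs: (n_trials, n_sensors, n_samples) at fs Hz.
# In practice these would be per-clip MEG responses from one participant.
fs = 250
n_trials, n_sensors, n_samples = 36, 32, fs * 30
epochs = rng.standard_normal((n_trials, n_sensors, n_samples))
# Binary labels, e.g. high vs. low arousal from self-reports (hypothetical).
y = rng.integers(0, 2, size=n_trials)

def band_power(epoch, fs, band):
    """Mean Welch PSD per sensor within a frequency band."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2, axis=-1)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[:, mask].mean(axis=-1)  # one value per sensor

# Concatenate theta/alpha/beta/gamma band powers into one vector per trial.
bands = [(4, 8), (8, 13), (13, 30), (30, 45)]
X = np.array([np.concatenate([band_power(ep, fs, b) for b in bands])
              for ep in epochs])

# Leave-one-trial-out cross-validation with a linear SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"single-trial accuracy: {scores.mean():.2f}")
```

In a real experiment, `epochs` would hold recorded MEG responses (e.g., loaded with MNE-Python) and `y` would come from participants' valence/arousal self-reports; leave-one-out evaluation holds out each clip's response as a single trial, matching the single-trial framing of the abstract.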

Original language: English
Title of host publication: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013
Editors: Rama Chellappa, Xilin Chen, Qiang Ji, Maja Pantic, Stan Sclaroff, Lijun Yin
Place of Publication: United States
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 1-6
Number of pages: 6
ISBN (Print): 9781467355452
DOIs
Publication status: Published - 2013
Externally published: Yes
Event: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013, Shanghai, China
Duration: 22 Apr 2013 – 26 Apr 2013

Publication series

Name: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013

Conference

Conference: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, FG 2013
Country/Territory: China
City: Shanghai
Period: 22/04/13 – 26/04/13
