TY - JOUR
T1 - DECAF
T2 - MEG-Based Multimodal Database for Decoding Affective Physiological Responses
AU - Abadi, Mojtaba Khomami
AU - Subramanian, Ramanathan
AU - Kia, Seyed Mostafa
AU - Avesani, Paolo
AU - Patras, Ioannis
AU - Sebe, Nicu
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2015/7/1
Y1 - 2015/7/1
AB - In this work, we present DECAF - a multimodal data set for decoding user physiological responses to affective multimedia content. Unlike data sets such as DEAP [15] and MAHNOB-HCI [31], DECAF contains (1) brain signals acquired using the Magnetoencephalogram (MEG) sensor, which requires little physical contact with the user's scalp and consequently facilitates naturalistic affective responses, and (2) explicit and implicit emotional responses of 30 participants to the 40 one-minute music video segments used in [15] and to 36 movie clips, thereby enabling comparisons between the EEG and MEG modalities as well as between movie and music stimuli for affect recognition. In addition to MEG data, DECAF comprises synchronously recorded near-infrared (NIR) facial videos and horizontal Electrooculogram (hEOG), Electrocardiogram (ECG), and trapezius-Electromyogram (tEMG) peripheral physiological responses. To demonstrate DECAF's utility, we present (i) a detailed analysis of the correlations between participants' self-assessments and their physiological responses and (ii) single-trial classification results for valence, arousal, and dominance, with performance evaluated against existing data sets. DECAF also contains time-continuous emotion annotations for the movie clips from seven users, which we use to demonstrate dynamic emotion prediction.
KW - Affective computing
KW - Emotion recognition
KW - MEG
KW - Single-trial classification
KW - User physiological responses
UR - http://www.scopus.com/inward/record.url?scp=84940987310&partnerID=8YFLogxK
U2 - 10.1109/TAFFC.2015.2392932
DO - 10.1109/TAFFC.2015.2392932
M3 - Article
AN - SCOPUS:84940987310
SN - 1949-3045
VL - 6
SP - 209
EP - 222
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
IS - 3
M1 - 7010926
ER -