Abstract
Depressed subjects have been shown to respond differently to images of positive and negative content when compared with non-depressed subjects. The underlying cause could be the impaired inhibition of negative affect, which has been found in depressed patients across several studies. We describe the techniques used in an ongoing study to compare the clinical diagnosis of depression with automatically measured facial activity and expressions. Video recordings are made of patients and control subjects watching a series of film clips portraying negative and positive content. Subject-specific Active Appearance Models are built in order to extract visual features from the faces within frames captured from the videos. The raw feature data is then used to measure each participant's facial activity and to train Support Vector Machines for recognition of facial expressions.
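The abstract outlines a two-stage pipeline: subject-specific Active Appearance Models yield per-frame feature vectors, and Support Vector Machines are trained on those vectors to recognise expressions. The sketch below illustrates only the second stage, assuming scikit-learn as the toolkit; the random stand-in data, three-class labels, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): expression classification with an SVM
# on per-frame feature vectors. AAM fitting is assumed to have already produced
# one vector of shape/appearance parameters per video frame; random data stands
# in for those features here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_frames, n_params = 600, 40                 # placeholder sizes, not from the paper
X = rng.normal(size=(n_frames, n_params))    # stand-in for AAM parameters per frame
y = rng.integers(0, 3, size=n_frames)        # hypothetical labels, e.g. neutral/positive/negative

# Standardise the parameters, then fit an RBF-kernel SVM; hyperparameters are illustrative.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

# Plain 5-fold cross-validation for brevity; a subject-independent evaluation
# would instead group folds by participant.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```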
Original language | English |
---|---|
Title of host publication | Proceedings of the MMCogEmS Workshop 2011 at the 13th Int. Conf. on Multimodal Interaction |
Editors | Fang Chen, Julien Epps, Natalie Ruiz, Eric Choi |
Place of Publication | Sydney, Australia |
Publisher | NICTA |
Pages | 1-2 |
Number of pages | 2 |
Publication status | Published - 2011 |
Event | MMCogEmS Workshop 2011 at the 13th International Conference on Multimodal Interaction 2011 - Alicante, Spain. Duration: 14 Nov 2011 → 18 Nov 2011 |
Conference
Conference | MMCogEmS Workshop 2011 at the 13th International Conference on Multimodal Interaction 2011 |
---|---|
Country/Territory | Spain |
City | Alicante |
Period | 14/11/11 → 18/11/11 |