Estimating Depression Severity from Long-Sequence Face Videos via an Ensemble Global Diverse Convolutional Model

Ghazal Bargshady, Roland Goecke

Research output: A Conference proceeding or a Chapter in Book › Conference contribution › peer-review

Abstract

Depression is a mood disorder that has serious consequences for both individuals and society. The current diagnosis of depression relies on questionnaires and clinical interviews, which are complex and subjective, as they depend on multiple factors such as the patient’s comorbidities, cognitive ability, and honesty in describing symptoms, as well as the experience and motivation of the clinician. An automated way of estimating depression severity objectively would, therefore, be of great assistance to clinicians. Over the last decade, various affective computing systems have been proposed that use machine learning, speech analysis, and computer vision techniques to extract unimodal or multimodal cues and estimate the severity of depression. Temporal information plays a crucial role in learning spatiotemporal patterns in depression data. When inferring depression severity from face videos, analysing long sequences at different temporal scales is potentially more informative than analysing short ones. Therefore, a new approach based on long-sequence structured global convolutions with diverse kernels at multiple temporal scales, combined in an end-to-end ensemble model, has been developed and evaluated for estimating depression severity from face video data. The application of this long-range dependency technique is novel in automated depression analysis. The proposed ensemble model explores the role of temporal scales in assessing depression severity from facial movement information and outperforms common deep learning models and single structured global convolution models. The results confirm that analysing facial movements at different temporal scales is an important component of effective diagnostic aids for depression analysis.
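The abstract describes the approach only at a high level. As a rough illustration of the multi-scale ensemble idea, the sketch below combines several long 1-D temporal convolution branches, each with a different kernel size, into a simple ensemble that regresses a severity score from a per-frame facial-feature sequence. The kernel sizes, the 136-dimensional landmark features, and the mean-pooled regression head are illustrative assumptions, not the authors' structured global convolution implementation.

# Illustrative sketch only: a multi-branch temporal convolution ensemble over a
# long per-frame face-feature sequence, regressing a single severity score.
# Kernel sizes, feature dimension, and the regression head are assumptions.
import torch
import torch.nn as nn


class TemporalBranch(nn.Module):
    """One branch: a long depthwise 1-D convolution at a fixed temporal scale."""

    def __init__(self, feat_dim: int, kernel_size: int):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, feat_dim, kernel_size,
                              padding=kernel_size // 2, groups=feat_dim)
        self.norm = nn.LayerNorm(feat_dim)
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, x):                           # x: (batch, time, feat_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h = self.norm(torch.relu(h)).mean(dim=1)    # average over time
        return self.head(h)                         # (batch, 1) severity estimate


class MultiScaleEnsemble(nn.Module):
    """Average the predictions of branches operating at different temporal scales."""

    def __init__(self, feat_dim: int = 136, kernel_sizes=(9, 33, 129)):
        super().__init__()
        self.branches = nn.ModuleList(
            TemporalBranch(feat_dim, k) for k in kernel_sizes)

    def forward(self, x):
        return torch.stack([b(x) for b in self.branches]).mean(dim=0)


# Example: 4 videos, 3000 frames each, 136-D facial-landmark features per frame.
model = MultiScaleEnsemble()
scores = model(torch.randn(4, 3000, 136))
print(scores.shape)                                 # torch.Size([4, 1])

The plain depthwise convolutions here stand in for the paper's structured global convolution kernels; the sketch conveys only how diverse temporal scales can be ensembled over long face-video sequences.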
Original language: English
Title of host publication: 2023 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2023)
Subtitle of host publication: Techniques and Applications, DICTA 2023
Editors: Anwaar Ulhaq, Phillip Torr, Manoranjan Paul, Toby Walsh, Subrata Chakraborty, Shams Islam, Imran Razzak
Place of Publication: Los Alamitos (CA), USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 296-303
Number of pages: 8
ISBN (Electronic): 9798350382204
ISBN (Print): 9798350382211
DOIs
Publication status: Published - 29 Jan 2024
Event: DICTA 2023 - Port Macquarie, Australia
Duration: 28 Nov 2023 - 1 Dec 2023
https://www.dictaconference.org/

Publication series

Name: 2023 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2023

Conference

Conference: DICTA 2023
Country/Territory: Australia
City: Port Macquarie
Period: 28/11/23 - 1/12/23
Internet address: https://www.dictaconference.org/
