TY - JOUR
T1 - Enhanced deep learning algorithm development to detect pain intensity from facial expression images
AU - Bargshady, Ghazal
AU - Zhou, Xujuan
AU - Deo, Ravinesh C.
AU - Soar, Jeffrey
AU - Whittaker, Frank
AU - Wang, Hua
N1 - Funding Information:
This study was funded by the Australian Research Council (ARC) (grant number LP150100673). The authors declare no conflict of interest.
Publisher Copyright:
© 2020 Elsevier Ltd
PY - 2020/7/1
Y1 - 2020/7/1
N2 - Automated detection of pain intensity from facial expressions, especially from face images that reflect a patient's health, remains a significant challenge in medical diagnostics and health informatics. Expert systems that prudently analyse facial expression images using automated machine learning algorithms are a promising approach to pain intensity analysis in the health domain. Deep neural networks and emerging machine learning techniques have made significant progress in feature identification, mapping, and the modelling of pain intensity from facial images, with great potential to aid health practitioners in the diagnosis of certain medical conditions. Consequently, significant research in the pain recognition and management area has applied deep learning algorithms to facial expression datasets to detect pain intensity in binary classes and to distinguish pain from non-pain faces. However, the volume of research on identifying multiple levels of pain intensity remains rather limited. This paper reports on a new enhanced deep neural network framework designed for the effective detection of pain intensity at four threshold levels from facial expression images. To explore the robustness of the proposed algorithms, the UNBC-McMaster Shoulder Pain Archive Database, comprising human facial images, was first balanced and then used for training and testing the classification model, coupled with the fine-tuned VGG-Face pre-trained network as a feature extraction tool. To reduce the dimensionality of the classification model's input data and extract the most relevant features, Principal Component Analysis was applied, improving computational efficiency. The pre-screened features, used as model inputs, were then passed to a new enhanced joint hybrid CNN-BiLSTM (EJH-CNN-BiLSTM) deep learning algorithm, comprising convolutional neural networks linked to a joint bidirectional LSTM, for multi-class pain classification. The resulting EJH-CNN-BiLSTM classification model, tested on estimating four different levels of pain, achieved a good degree of accuracy across several performance evaluation metrics. The results indicate that the enhanced EJH-CNN-BiLSTM classification algorithm is a potential tool for multi-class detection of pain intensity from facial expression images and can therefore be adopted as an artificial intelligence tool in medical diagnostics for automatic pain detection and subsequent pain management of patients.
AB - Automated detection of pain intensity from facial expressions, especially from face images that reflect a patient's health, remains a significant challenge in medical diagnostics and health informatics. Expert systems that prudently analyse facial expression images using automated machine learning algorithms are a promising approach to pain intensity analysis in the health domain. Deep neural networks and emerging machine learning techniques have made significant progress in feature identification, mapping, and the modelling of pain intensity from facial images, with great potential to aid health practitioners in the diagnosis of certain medical conditions. Consequently, significant research in the pain recognition and management area has applied deep learning algorithms to facial expression datasets to detect pain intensity in binary classes and to distinguish pain from non-pain faces. However, the volume of research on identifying multiple levels of pain intensity remains rather limited. This paper reports on a new enhanced deep neural network framework designed for the effective detection of pain intensity at four threshold levels from facial expression images. To explore the robustness of the proposed algorithms, the UNBC-McMaster Shoulder Pain Archive Database, comprising human facial images, was first balanced and then used for training and testing the classification model, coupled with the fine-tuned VGG-Face pre-trained network as a feature extraction tool. To reduce the dimensionality of the classification model's input data and extract the most relevant features, Principal Component Analysis was applied, improving computational efficiency. The pre-screened features, used as model inputs, were then passed to a new enhanced joint hybrid CNN-BiLSTM (EJH-CNN-BiLSTM) deep learning algorithm, comprising convolutional neural networks linked to a joint bidirectional LSTM, for multi-class pain classification. The resulting EJH-CNN-BiLSTM classification model, tested on estimating four different levels of pain, achieved a good degree of accuracy across several performance evaluation metrics. The results indicate that the enhanced EJH-CNN-BiLSTM classification algorithm is a potential tool for multi-class detection of pain intensity from facial expression images and can therefore be adopted as an artificial intelligence tool in medical diagnostics for automatic pain detection and subsequent pain management of patients.
KW - Deep neural networks
KW - Expert systems in healthcare
KW - Facial expression
KW - Machine learning
KW - Pain detection
UR - http://www.scopus.com/inward/record.url?scp=85079873981&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2020.113305
DO - 10.1016/j.eswa.2020.113305
M3 - Article
AN - SCOPUS:85079873981
SN - 0957-4174
VL - 149
SP - 1
EP - 10
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 113305
ER -