TY - JOUR
T1 - Multimodal Fusion for Objective Assessment of Cognitive Workload
T2 - A Review
AU - Debie, Essam
AU - Fernandez Rojas, Raul
AU - Fidock, Justin
AU - Barlow, Michael
AU - Kasmarik, Kathryn
AU - Anavatti, Sreenatha
AU - Garratt, Matt
AU - Abbass, Hussein A.
N1 - Funding Information:
Manuscript received July 4, 2019; revised August 31, 2019; accepted September 2, 2019. Date of publication September 23, 2019; date of current version February 17, 2021. This work was supported in part by the Commonwealth of Australia through the Australian Army and in part by the Defence Science Partnerships agreement of the Defence Science and Technology Group, as part of the Human Performance Research Network. This article was recommended by Associate Editor C.-T. Lin. (Corresponding author: Hussein A. Abbass.) E. Debie, R. Fernandez Rojas, M. Barlow, K. Kasmarik, S. Anavatti, M. Garratt, and H. A. Abbass are with the School of Engineering & IT, University of New South Wales, Canberra, ACT 2612, Australia (e-mail: e.debie@adfa.edu.au; hussein.abbass@gmail.com).
Publisher Copyright:
© 2013 IEEE.
PY - 2021/3
Y1 - 2021/3
N2 - Considerable progress has been made in improving the estimation accuracy of cognitive workload using various sensor technologies. However, the overall performance of different algorithms and methods remains suboptimal in real-world applications. Some studies in the literature demonstrate that a single modality is sufficient to estimate cognitive workload. However, these studies are limited to controlled settings, a scenario that differs significantly from the real world, where data gets corrupted, interrupted, and delayed. In such situations, the use of multiple modalities is needed. Multimodal fusion approaches have been successful in other domains, such as wireless sensor networks, in addressing single-sensor weaknesses and improving information quality and accuracy. These approaches are inherently more reliable when a data source is lost. In the cognitive workload literature, sensors such as electroencephalography (EEG), electrocardiography (ECG), and eye tracking have shown success in estimating aspects of cognitive workload. Multimodal approaches that combine data from several sensors can be more robust for real-time measurement of cognitive workload. In this article, we review the published studies on multimodal data fusion for estimating cognitive workload and synthesize their main findings. We identify opportunities for designing better multimodal fusion systems for cognitive workload modeling.
AB - Considerable progress has been made in improving the estimation accuracy of cognitive workload using various sensor technologies. However, the overall performance of different algorithms and methods remains suboptimal in real-world applications. Some studies in the literature demonstrate that a single modality is sufficient to estimate cognitive workload. However, these studies are limited to controlled settings, a scenario that differs significantly from the real world, where data gets corrupted, interrupted, and delayed. In such situations, the use of multiple modalities is needed. Multimodal fusion approaches have been successful in other domains, such as wireless sensor networks, in addressing single-sensor weaknesses and improving information quality and accuracy. These approaches are inherently more reliable when a data source is lost. In the cognitive workload literature, sensors such as electroencephalography (EEG), electrocardiography (ECG), and eye tracking have shown success in estimating aspects of cognitive workload. Multimodal approaches that combine data from several sensors can be more robust for real-time measurement of cognitive workload. In this article, we review the published studies on multimodal data fusion for estimating cognitive workload and synthesize their main findings. We identify opportunities for designing better multimodal fusion systems for cognitive workload modeling.
KW - Cognitive workload
KW - decision fusion
KW - information fusion
KW - mental load
KW - multimodal data fusion
KW - task load
UR - http://www.scopus.com/inward/record.url?scp=85101145104&partnerID=8YFLogxK
U2 - 10.1109/tcyb.2019.2939399
DO - 10.1109/tcyb.2019.2939399
M3 - Review article
C2 - 31545761
VL - 51
SP - 1542
EP - 1555
JO - IEEE Transactions on Cybernetics
JF - IEEE Transactions on Cybernetics
SN - 2168-2267
IS - 3
M1 - 8846583
ER -