TY - JOUR
T1 - A low resource 3D U-Net based deep learning model for medical image analysis
AU - Chetty, Girija
AU - Yamin, Mohammad
AU - White, Matthew
N1 - Funding Information:
The authors are thankful for the publicly available challenge dataset provided by the MICCAI Society (https://www.med.upenn.edu/sbia/brats2018/data.html); our preliminary findings in [1] and the performance benchmarks from the challenge organizers [12] were used as baseline performance references in this study.
Publisher Copyright:
© 2022, This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply.
PY - 2022/2
Y1 - 2022/2
N2 - The success of deep learning, a subfield of Artificial Intelligence, in the fields of image analysis and computer vision can be leveraged to build better decision support systems for clinical radiological settings. Detecting and segmenting tumorous tissue in the brain using deep learning and artificial intelligence is one such scenario, where radiologists can benefit from a computer-based second opinion or decision support when assessing disease severity and subject survival, enabling an accurate and timely clinical diagnosis. Gliomas are an aggressive form of brain tumor with irregular shapes and ambiguous boundaries, making them among the hardest tumors to detect, and they often require a combined analysis of different types of radiological scans for accurate detection. In this paper, we present a fully automatic deep learning method for brain tumor segmentation in multimodal, multi-contrast magnetic resonance image scans. The proposed approach is based on a lightweight U-Net architecture, consisting of a multimodal CNN encoder-decoder computational model. Using the publicly available Brain Tumor Segmentation (BraTS) Challenge 2018 dataset from the Medical Image Computing and Computer Assisted Intervention (MICCAI) society, our approach based on the proposed lightweight U-Net model, with no data augmentation requirements and without heavy computational resources, achieves improved performance compared with previous models in the challenge task that relied on heavy computational architectures and resources and various data augmentation approaches. This makes the model proposed in this work more suitable for remote, extreme, and low-resource health care settings.
AB - The success of deep learning, a subfield of Artificial Intelligence, in the fields of image analysis and computer vision can be leveraged to build better decision support systems for clinical radiological settings. Detecting and segmenting tumorous tissue in the brain using deep learning and artificial intelligence is one such scenario, where radiologists can benefit from a computer-based second opinion or decision support when assessing disease severity and subject survival, enabling an accurate and timely clinical diagnosis. Gliomas are an aggressive form of brain tumor with irregular shapes and ambiguous boundaries, making them among the hardest tumors to detect, and they often require a combined analysis of different types of radiological scans for accurate detection. In this paper, we present a fully automatic deep learning method for brain tumor segmentation in multimodal, multi-contrast magnetic resonance image scans. The proposed approach is based on a lightweight U-Net architecture, consisting of a multimodal CNN encoder-decoder computational model. Using the publicly available Brain Tumor Segmentation (BraTS) Challenge 2018 dataset from the Medical Image Computing and Computer Assisted Intervention (MICCAI) society, our approach based on the proposed lightweight U-Net model, with no data augmentation requirements and without heavy computational resources, achieves improved performance compared with previous models in the challenge task that relied on heavy computational architectures and resources and various data augmentation approaches. This makes the model proposed in this work more suitable for remote, extreme, and low-resource health care settings.
KW - AI
KW - Deep learning
KW - Fusion
KW - Medical
KW - Multimodal
KW - Segmentation
UR - http://www.scopus.com/inward/record.url?scp=85122324443&partnerID=8YFLogxK
U2 - 10.1007/s41870-021-00850-4
DO - 10.1007/s41870-021-00850-4
M3 - Article
AN - SCOPUS:85122324443
SN - 2511-2104
VL - 14
SP - 95
EP - 103
JO - International Journal of Information Technology (Singapore)
JF - International Journal of Information Technology (Singapore)
IS - 1
ER -