Colorectal cancer is one of the leading causes of cancer-related deaths in the world; in 2012, it was reported to have claimed more than 694,000 lives worldwide. When diagnosed early, survival from cancerous polyps can increase to up to 90%. Current reports estimate that one in 12 Australian males will develop colorectal cancer within their lifetime, one of the highest rates in the world. Among screening methods, optical colonoscopy is widely used to diagnose and remove cancerous polyps. Although optical colonoscopy is an effective procedure for diagnosing colorectal cancer, many factors affect the quality of this intervention. Inspecting the whole colon surface to detect polyps and other lesions is challenging because haustral folds can hide lesions, the organ can deform, visibility can be reduced by a dirty lens, and difficulties in operating the colonoscope can leave parts of the colon surface unvisualised. An undesirable consequence is missed cancerous lesions, up to 33% according to recent publications.

The aim of this research project is to enhance the quality of colon inspection by improving the quality and extent of the visual inspection of the internal colon surface. We developed an assistive technology that provides a panoramic view of the internal colon surface (a visibility map) from a colonoscopy video. A visibility map could provide feedback to clinicians about the quality of the intervention (potentially in real time), for example by increasing their awareness of areas not covered by the video. It could also be beneficial for following up patients and tracking lesions over multiple exams, and could serve as a core technology for training junior clinicians.
Challenges in generating visibility maps include: (i) colonoscopy videos comprise many uninformative frames (frames with no technical or clinical information); (ii) the colon is a flexible tubular organ, which makes navigation challenging not only for clinicians but also for computer vision algorithms; and (iii) the structure of the colon is complex, with many haustral folds requiring sophisticated modelling. Our novel framework comprises four main phases: (i) detect uninformative frames from motion and colour features; (ii) compute camera parameters, both intrinsic and extrinsic, using epipolar geometry analysis; (iii) model the colon and project the model into the colonoscopy frames using the camera parameters; and (iv) unroll and stitch the frames, correcting the camera parameters, to generate a visibility map. We leveraged the existing state-of-the-art CSIRO realistic colonoscopy simulator to generate test examples and validation experiments in which the ground truth is known. Our results showed that this method can detect uncovered areas and help clinicians identify them. Although more work is required, widespread use of this technology could help reduce the polyp miss rate, which would increase the quality of colonoscopy procedures and ultimately save lives.
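To illustrate phase (i), the sketch below flags uninformative frames with two simple image cues: a Laplacian-variance sharpness score (blurred frames score low) and a "red-out" heuristic for frames where the lens touches the mucosa. The function names, thresholds, and the choice of these particular cues are illustrative assumptions, not the exact motion and colour features used in the framework.

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness proxy: variance of a discrete 5-point Laplacian response.
    A low value suggests a blurred (likely uninformative) frame."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def is_uninformative(frame_rgb, sharpness_thresh=5.0, red_thresh=0.9):
    """Classify a frame (H x W x 3 RGB array) as uninformative if it is
    too blurred, or if the red channel dominates almost everywhere
    (a 'red-out' when the lens presses against the colon wall).
    Both thresholds are illustrative placeholders."""
    gray = frame_rgb.mean(axis=2)
    if laplacian_variance(gray) < sharpness_thresh:
        return True
    # Fraction of pixels where red exceeds green + blue combined.
    red_dominance = (frame_rgb[..., 0] > frame_rgb[..., 1] + frame_rgb[..., 2]).mean()
    return bool(red_dominance > red_thresh)
```

In practice such per-frame scores would be combined with inter-frame motion features, as the abstract indicates, and thresholds tuned on labelled colonoscopy footage.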
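Phase (ii) rests on epipolar geometry: with known intrinsics, the relative pose between two frames is encoded in the essential matrix, which can be estimated linearly from point correspondences. The sketch below shows the classic eight-point estimation in normalised image coordinates (intrinsics already applied); it is a minimal linear version, without the RANSAC and refinement a real colonoscopy pipeline would need, and the function name is our own.

```python
import numpy as np

def essential_from_matches(x1, x2):
    """Estimate the essential matrix E (up to scale) from N >= 8
    correspondences, satisfying x2_h^T E x1_h = 0.
    x1, x2: (N, 2) arrays of normalised image coordinates."""
    u1, v1 = x1[:, 0], x1[:, 1]
    u2, v2 = x2[:, 0], x2[:, 1]
    ones = np.ones(len(x1))
    # Each correspondence gives one linear constraint on the 9 entries of E.
    A = np.column_stack([u2 * u1, u2 * v1, u2,
                         v2 * u1, v2 * v1, v2,
                         u1, v1, ones])
    # Null vector of A (smallest singular value) is the flattened E.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

The rotation and translation between the two frames can then be factored out of E (e.g. by the standard four-fold decomposition plus a cheirality check), giving the extrinsic parameters the pipeline needs for projecting the colon model.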
Date of Award: 1 Jan 2016