
Study of magnetic controlled capsule endoscope autonomous navigation and polyp segmentation in gastrointestinal tracts based on computer vision and deep learning

Advisor: 劉志文

Abstract


As artificial intelligence begins to lead the trend, computer vision has found direct application across industries. At the same time, deep convolutional neural networks have pushed computer vision research into a new era. This study proposes a method for achieving autonomous navigation of a magnetically controlled capsule endoscope using YOLO, a state-of-the-art deep learning model for object detection. We validate YOLO's lumen detection performance on the KVASIR gastrointestinal image dataset. In addition, we use a multi-task learning model to combine YOLO with a fully convolutional network for polyp segmentation. The multi-task model adopts a root-branch architecture: the root shares the front-end layers of the network to extract low-level semantic information, which reduces model size while efficiently meeting the demand for high computing power, and each branch then extracts high-level semantic information for its own learning task. With its fast and accurate performance, the multi-task learning model can carry out real-time lumen detection and automatic polyp segmentation in the gastrointestinal tract.

Parallel Abstract


As artificial intelligence starts leading the trends in various industries, computer vision can be directly applied to all walks of life. Meanwhile, the application of deep convolutional neural networks has pushed computer vision research into a new era. In this paper, a novel deep-learning-based navigation method for a magnetically controlled capsule endoscope is proposed, specifically by using the deep learning model "You Only Look Once" (YOLO). YOLO is a state-of-the-art, real-time object detection model. We use the KVASIR dataset to evaluate the performance of lumen detection. Furthermore, a multi-task learning network is used to integrate YOLO with a fully convolutional network (FCN) aimed at polyp segmentation. The structure of the multi-task learning network is Root-Branch. The root part shares the front-end layers of the network to extract low-level semantic information, which decreases the size of the model and efficiently meets the demand for high computing power. The branch part extracts precise high-level semantic information for each task. The multi-task learning network is able to detect the lumen and segment the polyp in the GI tract in real time with high speed and accuracy.
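The Root-Branch idea above, one shared feature extractor feeding separate detection and segmentation heads, can be sketched in plain NumPy. Everything below (kernel counts, layer shapes, the pooling-plus-linear detection head, and the 1x1-convolution segmentation head) is an illustrative assumption, not the thesis architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Shared "root": one toy convolution layer extracting low-level features.
#     Stand-in for the shared front-end layers; sizes are arbitrary. ---
def root_features(image, kernels):
    h, w = image.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(image, pad)
    feats = np.zeros((kernels.shape[0], h, w))
    for c, kern in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                feats[c, i, j] = np.sum(padded[i:i + k, j:j + k] * kern)
    return np.maximum(feats, 0.0)  # ReLU

# --- Detection branch: global average pool -> box centre + confidence. ---
def detect_branch(feats, w_det):
    pooled = feats.mean(axis=(1, 2))            # one value per feature channel
    out = w_det @ pooled                        # tiny linear head
    cx, cy = out[0], out[1]
    conf = 1.0 / (1.0 + np.exp(-out[2]))        # sigmoid confidence
    return cx, cy, conf

# --- Segmentation branch: 1x1 convolution -> per-pixel mask probability. ---
def segment_branch(feats, w_seg):
    logits = np.tensordot(w_seg, feats, axes=([0], [0]))
    return 1.0 / (1.0 + np.exp(-logits))        # sigmoid mask

image = rng.random((16, 16))
kernels = rng.standard_normal((4, 3, 3))
w_det = rng.standard_normal((3, 4))
w_seg = rng.standard_normal(4)

feats = root_features(image, kernels)  # computed ONCE, shared by both branches
cx, cy, conf = detect_branch(feats, w_det)
mask = segment_branch(feats, w_seg)
print(feats.shape, mask.shape)
```

The point of the sketch is the data flow: `root_features` runs once per frame, so the expensive low-level extraction is paid a single time, and each branch only adds a small task-specific head on top of the shared features.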
