
應用FCN深度學習對肺結節低劑量電腦斷層影像進行分割之研究

The Study of Segmented Lung Nodules from Low Dose CT by Using Deep Learning FCN Methods

Advisor: 陳泰賓

Abstract


Purpose: Computed tomography (CT) is one of the important imaging tools for the early diagnosis of lung nodules, and low-dose lung computed tomography (LDCT) is now widely used in health examinations. However, because each LDCT scan contains many slices, small nodules are sometimes difficult to detect. Therefore, fully convolutional neural networks (FCN) were applied to label small nodules on LDCT; the main goal was to segment the boundaries of lung nodules with high feasibility and accuracy.

Materials and Methods: This was designed as a retrospective study; a total of 126 valid subjects with lung nodules were collected. The FCN adopted three backbone networks: ResNet18, ResNet50, and MobileNetv2. Image preprocessing included retaining the lung region and converting the images to JPG color format. A total of 450 parameter combinations were examined, covering 3 FCN models, 3 training ratios, 5 batch sizes, 5 epoch settings, and 2 solver functions; the training, testing, and validation sets were randomly split into 70%, 20%, and 10% of the data. Model performance was evaluated using global accuracy, mean accuracy, mean intersection over union (IoU), weighted IoU, and mean boundary F-1 score (BF).

Results: On the validation set, the best model was MobileNetv2, whose global accuracy, mean accuracy, mean IoU, weighted IoU, and mean BF score were 0.9999, 0.9637, 0.9491, 0.9999, and 0.9995, respectively.

Conclusions: Image preprocessing together with JPG color-format images allowed the FCN to segment lung nodules on LDCT effectively. In the future, the number of cases will be increased to improve segmentation performance, and fully 3D neural network models may be applied to lesion segmentation.
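No code accompanies this abstract; as a rough illustration of the experimental design described above, the sketch below enumerates a 3 × 3 × 5 × 5 × 2 grid (450 combinations) and performs a 70%/20%/10% random split of subjects. Apart from the three backbone names, the subject count, and the split percentages, every value and helper name here is an assumption for illustration only, not the settings actually used in the thesis.

```python
import itertools
import random

# Hypothetical search space mirroring the counts in the abstract:
# 3 FCN backbones x 3 training ratios x 5 batch sizes x 5 epoch settings x 2 solvers = 450.
# Only the backbone names come from the thesis; the remaining candidate values are assumed.
backbones    = ["resnet18", "resnet50", "mobilenetv2"]
train_ratios = [0.6, 0.7, 0.8]        # assumed candidate training ratios
batch_sizes  = [2, 4, 8, 16, 32]      # assumed
epoch_counts = [10, 20, 30, 40, 50]   # assumed
solvers      = ["sgdm", "adam"]       # assumed optimizers ("solver functions")

grid = list(itertools.product(backbones, train_ratios, batch_sizes, epoch_counts, solvers))
assert len(grid) == 450  # matches the 450 parameter combinations reported above

def split_subjects(subject_ids, seed=0):
    """Randomly split subjects into 70% training / 20% testing / 10% validation."""
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_test = int(0.7 * n), int(0.2 * n)
    return ids[:n_train], ids[n_train:n_train + n_test], ids[n_train + n_test:]

# 126 subjects, as in the study; the split is done at subject level.
train_ids, test_ids, val_ids = split_subjects(range(126))
```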

Abstract (English)


Purpose: Computed tomography (CT) plays a crucial role in the early diagnosis of lung nodules, and low-dose lung computed tomography (LDCT) is now commonly used in health examinations. However, because LDCT produces a large number of slices, small nodules are difficult for experts to detect. Therefore, fully convolutional neural networks (FCN) were applied in this study to detect small nodules on LDCT. The main aim was to segment the boundary of lung nodules with high feasibility and accuracy. Materials and Methods: This was designed as a retrospective study. A total of 126 valid subjects with lung nodules were collected. The FCN framework adopted popular convolutional neural network (CNN) backbones, including ResNet18, ResNet50, and MobileNetv2. Image preprocessing included retaining the lung regions, converting the images to JPG format, and resizing the JPG images. Three FCN models, three split ratios, five batch sizes, five epoch settings, and two solver functions were examined, giving a total of 450 parameter combinations. The training, testing, and validation sets comprised 70%, 20%, and 10% of the data, obtained with a random splitting approach. Segmentation performance was evaluated using global accuracy, mean accuracy, mean intersection over union (IoU), weighted IoU, and boundary F1 score (BF score). Results: On the validation set, the best model was MobileNetv2, whose global accuracy, mean accuracy, mean IoU, weighted IoU, and BF score were 0.9999, 0.9637, 0.9491, 0.9999, and 0.9995, respectively. Conclusions: With image preprocessing and JPG-format images, the FCN segmented LDCT lung nodules effectively; the presented methods segmented nodules with feasible efficiency and reasonable precision. In the future, the number of cases should be increased to improve segmentation performance, and a fully 3D neural network model might be considered for lesion segmentation.
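For reference, the following sketch shows one standard way to compute the reported region-level metrics (global accuracy, mean accuracy, mean IoU, weighted IoU) from a pixel-level confusion matrix; the boundary F1 (BF) score additionally requires boundary extraction and distance matching and is omitted here. This is a minimal NumPy sketch based on the usual definitions, not the evaluation code used in the study.

```python
import numpy as np

def segmentation_metrics(conf):
    """Region metrics from a KxK pixel confusion matrix.
    conf[i, j] = number of pixels of true class i predicted as class j."""
    conf = conf.astype(float)
    tp   = np.diag(conf)                # per-class true positives
    gt   = conf.sum(axis=1)             # pixels per ground-truth class
    pred = conf.sum(axis=0)             # pixels per predicted class

    global_acc = tp.sum() / conf.sum()  # fraction of correctly classified pixels
    class_acc  = tp / gt                # per-class accuracy (recall)
    iou        = tp / (gt + pred - tp)  # per-class intersection over union

    return {
        "GlobalAccuracy": global_acc,
        "MeanAccuracy":   class_acc.mean(),
        "MeanIoU":        iou.mean(),
        "WeightedIoU":    np.average(iou, weights=gt / gt.sum()),
    }

# Toy 2-class example (background vs. nodule); the pixel counts are illustrative only.
conf = np.array([[99000, 40],
                 [   30, 930]])
print(segmentation_metrics(conf))
```

Because nodule pixels are vastly outnumbered by background pixels, global accuracy and weighted IoU tend to sit near 1.0 even for imperfect masks, which is why mean accuracy, mean IoU, and the BF score are the more informative of the reported values.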

Keywords (English)

FCN; Lung Nodule; LDCT

