
結合模態轉換擴增資料及主動輪廓模型之卷積神經網路於醫學影像分割及其遮罩之準確率預測

Convolutional Neural Networks Combined with Modality Transfer Augmentation and Active Contour Model for Medical Image Segmentation and Its Performance Prediction

Advisor: 藍俊宏

Abstract


Liver cancer is highly prevalent worldwide, and liver lesions are commonly diagnosed in clinical practice by analyzing medical images. Traditionally, however, physicians locate the organs in these images by eye, relying on professional expertise; besides the considerable time and effort required to read the images, this can lead to misjudgment or inconsistent standards. With the development of computer vision and deep learning, image segmentation has been widely applied to medical image processing, so segmenting the liver and tumor in real time in the clinic could effectively assist physicians with pre-operative assessment. This thesis therefore builds a liver-tumor segmentation model on a convolutional neural network backbone with a two-stage segmentation workflow: the liver region is identified first, and the tumor is then segmented within the liver's extent, automatically delineating the tumor's location.

Medical images have two key characteristics. The first is imprecise training masks: liver and tumor contours are generally drawn by hand after a physician's judgment and often have incomplete boundaries, and training on such manual masks biases the model so that it cannot segment the target organ effectively and correctly. This study therefore first corrects the imprecise masks, using the active contour model to make them adhere more closely to the organ boundaries. The second characteristic is the scarcity of medical images: they are difficult to obtain, and tumor slices in particular are highly imbalanced; too little training data prevents the model from learning image features effectively. This study therefore develops a CycleGAN-based modality transfer technique that converts a large number of publicly available CT images into the target MR modality while preserving the shape of the tumor regions in the original CT images, thereby augmenting the data and improving the quality of tumor segmentation.

Model performance has conventionally been measured by comparing predictions against ground truth; when the model is deployed in practice, however, test data without correct answers cannot be evaluated this way. To solve this problem, this study designs a slice review mechanism that predicts the segmentation model's performance, providing a "confidence" rating for the model's output, judging whether a prediction is reliable, and filtering out low-accuracy slices for further examination by physicians. In summary, the proposed two-stage tumor segmentation achieves a Dice coefficient of 88% and an IoU of 79%, outperforming training on the original MR data alone.

Parallel Abstract


Hepatocellular carcinoma (HCC) is prevalent worldwide, and liver conditions are clinically diagnosed and tracked by analyzing medical images. However, examining these images requires the attending physician's domain expertise to manually mark the position of the target organ, which makes the examination time-consuming and exhausting, and the manual marking process may lead to inconsistent judgments. With the advancement of computer vision and deep learning, image segmentation is now widely applied to medical images, so segmenting the liver and tumor can assist doctors with pre-surgical assessment. This thesis develops a liver-tumor segmentation method by proposing a two-stage framework based on convolutional neural networks (CNN): the first stage segments the liver region, and the second stage marks the tumor within the corresponding liver region.

Two issues complicate this segmentation task. First, the ground-truth masks may be imprecise because they are labeled by hand; incomplete and discontinuous mask boundaries introduce learning bias into a supervised segmentation model. To address this, we apply the active contour model to refine the manually marked masks, making them much closer to the real boundaries of the liver and tumor. Second, medical images are available only in limited numbers, especially tumor slices, so the segmentation problem also suffers from an imbalanced data distribution, e.g., only a few tumor slices to learn from in contrast to the normal slices. To augment the minority class, we propose a modality transfer technique based on CycleGAN: CT images are translated into the target MR modality while the tumor region and shape are preserved. Through this data augmentation, segmentation performance can be improved.

To make further use of the proposed segmentation model, one needs to evaluate each predicted mask against the ground truth; however, ground-truth masks do not exist when the model is deployed in clinical scenes, where domain expertise would otherwise be required to estimate the quality of the predicted masks. We therefore design a slice review workflow that predicts the segmentation performance from radiomics features extracted from the MR images. The review workflow filters out low-confidence predicted masks and sends them to the doctors for careful examination. Although human intervention is then involved, we believe this is the only way to turn the segmentation model into "assisted intelligence." In the practical data study, our two-stage segmentation model achieves an 88% Dice coefficient and 79% IoU (Intersection over Union), outperforming the model trained only on the original MR images.
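As a concrete illustration of the two-stage framework described above, the sketch below shows one plausible inference loop in Python. It is a minimal sketch, not the thesis's implementation: the objects `liver_model` and `tumor_model` are hypothetical CNN segmenters assumed to return per-pixel probability maps, and masking the slice with the stage-one liver prediction is only one way of restricting stage two to the liver region (cropping a bounding box would be another).

```python
import numpy as np

def two_stage_segment(mr_slice, liver_model, tumor_model, threshold=0.5):
    """Hypothetical two-stage inference: segment the liver first, then the
    tumor inside the predicted liver region.

    mr_slice    : 2-D float array, a single MR slice
    liver_model : stage-one CNN, assumed to map (1, H, W, 1) -> (1, H, W, 1) probabilities
    tumor_model : stage-two CNN with the same assumed interface
    """
    x = mr_slice[np.newaxis, ..., np.newaxis].astype(np.float32)

    # Stage 1: liver probability map over the whole slice.
    liver_prob = liver_model.predict(x)[0, ..., 0]
    liver_mask = (liver_prob > threshold).astype(np.uint8)

    # Stage 2: keep only the liver region and segment the tumor within it.
    masked = (mr_slice * liver_mask)[np.newaxis, ..., np.newaxis].astype(np.float32)
    tumor_prob = tumor_model.predict(masked)[0, ..., 0]
    tumor_mask = ((tumor_prob > threshold) & (liver_mask > 0)).astype(np.uint8)

    return liver_mask, tumor_mask
```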
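The mask-refinement step can be illustrated with an off-the-shelf active contour implementation. The sketch below uses scikit-image's morphological Chan-Vese routine as a stand-in; the exact active contour variant, parameters, and pre-processing used in the thesis are not specified here, so treat the function and its settings as assumptions.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def refine_mask(image_slice, rough_mask, iterations=50):
    """Pull a hand-drawn mask toward the organ boundary with a region-based
    active contour (morphological Chan-Vese).

    image_slice : 2-D float array, the slice the mask was drawn on
    rough_mask  : 2-D binary array, the physician's rough annotation
    """
    refined = morphological_chan_vese(
        image_slice.astype(float),
        iterations,                                 # number of evolution steps
        init_level_set=rough_mask.astype(np.int8),  # start from the rough mask
        smoothing=2,                                # regularizes the contour
    )
    return refined.astype(np.uint8)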
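For the modality transfer augmentation, a trained CycleGAN generator maps CT slices into MR-like slices while the tumor geometry is kept, so the original CT masks can be reused as labels. The sketch below assumes a hypothetical pretrained generator object `ct_to_mr_generator` with a Keras-style `predict` method; training the CycleGAN itself is outside the scope of this sketch.

```python
import numpy as np

def augment_with_modality_transfer(ct_slices, ct_tumor_masks, ct_to_mr_generator):
    """Translate CT tumor slices into MR-like slices with a (hypothetical)
    pretrained CycleGAN generator and pair them with the original CT masks."""
    synthetic_mr = []
    for ct in ct_slices:
        x = ct[np.newaxis, ..., np.newaxis].astype(np.float32)
        # The generator is assumed to expose a Keras-style predict() method.
        mr_like = ct_to_mr_generator.predict(x)[0, ..., 0]
        synthetic_mr.append(mr_like)
    # Cycle-consistency is relied on to preserve the tumor region and shape,
    # so the CT masks are reused directly as labels for the synthetic MR slices.
    return np.stack(synthetic_mr), np.asarray(ct_tumor_masks)
```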
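The reported metrics follow the standard definitions, Dice = 2|A∩B| / (|A| + |B|) and IoU = |A∩B| / |A∪B| for a predicted mask A and ground-truth mask B; a small NumPy helper makes the computation explicit.

```python
import numpy as np

def dice_and_iou(pred, truth, eps=1e-7):
    """Dice coefficient and IoU between two binary masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum() + eps)
    iou = inter / (np.logical_or(pred, truth).sum() + eps)
    return dice, iou
```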
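The slice review workflow can be read as a regression problem: learn a mapping from a slice's radiomics features to the Dice score its predicted mask achieved on held-out data, then flag deployment slices whose predicted score is low. The sketch below uses a random forest regressor and a threshold of 0.7 purely as illustrative assumptions; the thesis does not fix these choices here.

```python
from sklearn.ensemble import RandomForestRegressor

def train_reviewer(radiomics_features, observed_dice):
    """Fit a regressor mapping per-slice radiomics features to the Dice score
    the segmentation model achieved on those (held-out) slices."""
    reviewer = RandomForestRegressor(n_estimators=200, random_state=0)
    reviewer.fit(radiomics_features, observed_dice)
    return reviewer

def review_slices(reviewer, radiomics_features, confidence_threshold=0.7):
    """Predict a Dice score per slice and flag low-confidence slices for review."""
    predicted_dice = reviewer.predict(radiomics_features)
    needs_review = predicted_dice < confidence_threshold
    return predicted_dice, needs_review
```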

