
Diagnosis of Chronic Obstructive Pulmonary Disease in Chest X-Ray Images Using a ResNeXt-based Network

Advisor: 張瑞峰

Abstract


Chronic Obstructive Pulmonary Disease (COPD) is an irreversible chronic disease characterized by long-term airway inflammation and is currently the third leading cause of death worldwide. Early diagnosis allows appropriate treatment to begin before the disease worsens. Chest X-ray (CXR) examination is a common diagnostic aid with which physicians evaluate a patient's pulmonary inflammation and plan subsequent care; however, correctly reading and interpreting CXR images usually requires an experienced radiologist.

This study proposes a computer-aided diagnosis (CAD) system for CXR images that assists physicians in diagnosing COPD using an improved convolutional neural network (CNN) augmented with an attention mechanism. The CXR images were taken from a public CXR dataset (PadChest), from which two study datasets (Dunclean and Dclean) were created according to whether low-quality images were included, containing 37,575 and 35,289 images, respectively; each image carries one of two labels: COPD patient or healthy patient. The proposed IResNeXt model combines the design strengths of the ResNeXt and IResNet models, and the experimental results show that IResNeXt outperforms both. In addition, this study adds an attention mechanism to improve the generalization ability of the IResNeXt model.

With the attention mechanism, the proposed system achieves 82.07% accuracy, 87.04% sensitivity, 79.11% specificity, and an area under the ROC curve (AUC) of 0.9113 on the Dunclean dataset, and 82.36% accuracy, 82.70% sensitivity, 82.15% specificity, and an AUC of 0.9069 on the Dclean dataset. Finally, visualizations produced with the Grad-CAM method confirm that the proposed model correctly attends to chest-related regions, effectively assisting physicians in clinical diagnosis.
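For reference, the accuracy, sensitivity, and specificity figures reported above are standard functions of the confusion-matrix counts. The sketch below is a minimal illustration of those definitions with hypothetical counts; it is not code from the thesis.

```python
def classification_metrics(tp, fn, tn, fp):
    """Binary-classification metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    sen = tp / (tp + fn)                    # sensitivity (true positive rate)
    spec = tn / (tn + fp)                   # specificity (true negative rate)
    return acc, sen, spec

# Hypothetical counts, chosen only to illustrate the computation
acc, sen, spec = classification_metrics(tp=870, fn=130, tn=790, fp=210)
print(acc, sen, spec)  # 0.83 0.87 0.79
```

Note that sensitivity and specificity trade off against each other through the decision threshold, which is why the AUC is reported alongside them as a threshold-independent summary.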

Parallel Abstract


Chronic Obstructive Pulmonary Disease (COPD) is an irreversible chronic lung disease characterized by long-term airway inflammation and is the world's third leading cause of death. Early detection helps patients receive the correct treatment and prevents disease progression. Chest X-ray (CXR) is a common diagnostic aid for evaluating a patient's lung inflammation and planning subsequent treatment; however, identifying COPD symptoms in CXR images is difficult and can only be properly observed and adequately interpreted by experienced radiologists.

In this study, we developed a computer-aided diagnosis (CAD) system that uses CXR images to assist radiologists in diagnosing COPD, built on an improved convolutional neural network (CNN) employing an attention mechanism. The CXR images used in this study came from a public CXR dataset (PadChest), from which two datasets (Dunclean and Dclean) were created according to whether invalid images were included, containing 37,575 and 35,289 images, respectively. Each image carries one of two labels: COPD patient or healthy patient. Our system combines the design advantages of the ResNeXt and IResNet models into the proposed IResNeXt model; according to the experimental results, IResNeXt outperforms both ResNeXt and IResNet. In addition, we added an attention mechanism to improve the generalization ability of the IResNeXt model.

In our experiments, the model achieved the highest ACC (82.07%), SEN (87.04%), NPV (91.09%), and AUC (0.9113) on the Dunclean dataset, and the highest ACC (82.36%), SPEC (82.15%), PPV (73.49%), and AUC (0.9069) on the Dclean dataset. Moreover, we visualized the IResNeXt model with the Grad-CAM method, confirming that it focuses accurately on disease-related regions, which can help physicians identify COPD symptoms in clinical diagnosis.
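The ResNeXt design that IResNeXt builds on aggregates many parallel transformations of narrow channel groups (the "cardinality" dimension). The NumPy sketch below illustrates that grouped-transformation idea on flat feature vectors; it is an assumption-laden illustration, not the thesis implementation, and the `cardinality` value and shapes are arbitrary.

```python
import numpy as np

def aggregated_transform(x, group_weights):
    """ResNeXt-style aggregated transformation: split the channels into
    equally sized groups, transform each group independently, and
    concatenate the results (the same computation as a grouped 1x1
    convolution applied to one spatial position)."""
    groups = np.split(x, len(group_weights), axis=-1)
    return np.concatenate([g @ w for g, w in zip(groups, group_weights)], axis=-1)

rng = np.random.default_rng(0)
cardinality = 32                      # number of parallel paths
x = rng.standard_normal((4, 64))      # batch of 4 feature vectors, 64 channels
# 64 channels / 32 paths = 2 channels per path
ws = [rng.standard_normal((2, 2)) for _ in range(cardinality)]
y = aggregated_transform(x, ws)       # channel count preserved: (4, 64)
```

Because each path sees only its own slice of channels, the parameter count is far lower than one dense 64x64 transform, which is the efficiency argument behind increasing cardinality instead of width.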

Keywords

COPD; CXR; PadChest; CAD; CNN; attention mechanism; Grad-CAM

