Motivation and Purpose: Chest X-ray imaging is indispensable for diagnosing thoracic diseases, and an accurate diagnosis is critical to a patient's subsequent treatment choices and prognosis. The diversity of these images increases the difficulty of diagnosis, making manual interpretation subject to variability and error. This study aims to leverage advances in convolutional neural networks (CNNs) and machine learning to improve the accuracy and reliability of chest X-ray image classification; the goal is to build a robust model that provides radiologists with consistent and precise diagnostic support. Materials and Methods: Using the comprehensive NIH Chest X-ray Database, chest X-ray images covering 15 disease categories were preprocessed, and data augmentation and random sampling techniques were applied to build a balanced dataset for model training. Transfer learning was used to fine-tune several pre-trained CNN architectures, including EfficientNetB0, InceptionV3, MobileNetV2, ResNet101, ResNet50, ShuffleNet, and Xception. Features extracted by these CNNs were then fed into traditional machine learning classifiers such as Logistic Regression, Naïve Bayes, and the Support Vector Machine (SVM), and model performance was evaluated with accuracy and the Kappa value. Results: The machine learning methods built on the fused features significantly improved classification accuracy; the SVM classifier, with an accuracy of 0.848 and a Kappa value of 0.837, was the most accurate model. Among the CNN models, ShuffleNet (accuracy 0.633, Kappa 0.606) and Xception (accuracy 0.632, Kappa 0.605) performed well after transfer learning but did not surpass the traditional SVM classifier. Conclusion: Although CNN transfer learning shows promise for chest X-ray image classification, combining it with feature engineering and traditional machine learning methods currently delivers better accuracy. This finding paves the way for future work on an optimal hybrid model that unites the analytical power of deep learning with the precision of feature engineering. The study underscores the importance of tailored AI approaches in advancing medical image analysis and points toward a future in which AI can substantially support clinical diagnosis.
Motivation and Purpose: Chest X-ray imaging is a pivotal diagnostic tool for detecting thoracic diseases, and an accurate diagnosis is crucial to patients' subsequent treatment choices and prognosis. The diversity of these images increases the complexity of diagnosis, and their manual interpretation is subject to variability and error. The motivation behind this study is to leverage advances in Convolutional Neural Networks (CNNs) and machine learning to improve the accuracy and reliability of chest X-ray image classification. The purpose is to develop a robust model that aids radiologists by providing consistent and accurate diagnostic support. Materials and Methods: This research uses a comprehensive dataset of chest X-ray images drawn from the NIH Chest X-ray Database. Images from 15 disease categories were preprocessed, and data augmentation and random sampling techniques were employed to establish a balanced dataset for model training. Transfer learning was applied to fine-tune pre-trained CNN models, including the EfficientNetB0, InceptionV3, MobileNetV2, ResNet101, ResNet50, ShuffleNet, and Xception architectures. In addition, features extracted by these seven fine-tuned CNNs were fed into traditional machine learning classifiers, namely Logistic Regression, Naïve Bayes, and the Support Vector Machine (SVM). Accuracy and the Kappa value were used to evaluate model performance. Results: The findings reveal that the integrated machine learning methods significantly enhance classification accuracy, with the SVM classifier achieving the highest accuracy (0.848) and Kappa score (0.837). Among the CNN models, ShuffleNet (accuracy 0.633, Kappa 0.606) and Xception (accuracy 0.632, Kappa 0.605) demonstrated the best performance after transfer learning.
However, they did not surpass the traditional machine learning classifiers. Conclusion: The study concludes that while CNN transfer learning shows promise in chest X-ray image classification, integrating engineered features with traditional machine learning methods yields better accuracy. This suggests a promising direction for future research: pursuing an optimal hybrid model that combines the strengths of deep learning and feature engineering. The results underscore the importance of tailored approaches in medical image analysis and point toward a future where AI could significantly aid clinical diagnostics.
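The feature-fusion stage described above (CNN-extracted features fed into Logistic Regression, Naïve Bayes, and SVM, scored with accuracy and the Kappa value) can be sketched with scikit-learn. This is a minimal illustration, not the study's implementation: the Gaussian clusters below are hypothetical stand-ins for the deep features that the fine-tuned CNNs would produce, and the class count (15) mirrors the disease categories in the dataset.

```python
# Hedged sketch of the feature-fusion evaluation. The "deep features" here are
# synthetic Gaussian clusters standing in for CNN embeddings; in the study they
# would come from the fine-tuned networks (e.g., ResNet50, Xception).
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n_classes = 15          # disease categories in the NIH dataset
n_per_class = 40        # balanced sampling, as in the study's design
feat_dim = 64           # hypothetical feature-vector length

# One Gaussian cluster per class, simulating extracted feature vectors.
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, feat_dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# The three traditional classifiers named in the abstract.
classifiers = {
    "SVM": SVC(kernel="rbf"),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
}

results = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Accuracy and Cohen's Kappa: the two metrics used in the evaluation.
    results[name] = (accuracy_score(y_te, pred),
                     cohen_kappa_score(y_te, pred))
    print(f"{name}: acc={results[name][0]:.3f}, kappa={results[name][1]:.3f}")
```

`cohen_kappa_score` corrects accuracy for chance agreement, which is why the study reports Kappa values slightly below the corresponding accuracies; on real CNN features, the fit/predict loop is identical, only the feature matrix changes.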