Breast cancer has the highest incidence among women worldwide; in Taiwan it ranks first in incidence and second in cancer mortality. With changing lifestyles and dietary habits, its incidence is rising year by year and shifting toward younger women, with a growing number of patients under 40, so women over 30 need to be particularly vigilant. Regular imaging examinations detect breast cancer earlier than self-examination. Breast ultrasound, currently the modality most widely used to screen younger women, is especially well suited to the dense breast tissue common among young Asian women; however, ultrasound screening depends on the physician's experience and subjective judgment, and manual interpretation is labor-intensive and time-consuming.

In recent years, deep learning has earned wide recognition as a medical aid, and many artificial-intelligence applications have appeared in medicine; deep learning techniques, particularly Convolutional Neural Networks (CNNs), have markedly improved computer-assisted recognition in medical imaging. This study focuses on BI-RADS category 4 breast ultrasound images, because this category indicates abnormal breast changes with suspected malignancy, its likelihood of malignancy spans 2% to 95%, and it is the category physicians find hardest to judge. To avoid unnecessary biopsies and pathological examinations, the study uses CNNs for tumor recognition and employs Deep Convolutional Generative Adversarial Networks (DCGAN) and Variational Autoencoders (VAE) to generate images close to the real data, augmenting the training samples to strengthen the model's recognition ability. It then evaluates the practical benefit of these generated images for data augmentation and accuracy, using Precision, Recall, F1 score, ROC AUC, and PR AUC as metrics. The aim is to use deep learning to improve recognition support in medicine, helping physicians interpret images more efficiently and consistently and, in particular, distinguish benign from malignant tumors in ultrasound images.

The experimental results show that Experiment 9 (real data + DCGAN-VAE + two VAE sets) achieved a precision of 0.8067, a recall of 0.6833, an F1 score of 0.7033, an ROC AUC of 0.87, and a PR AUC of 0.95, outperforming Experiment 1 (real data only) on every metric, especially loss, precision, recall, and F1 score, and demonstrating stronger classification ability. Combining DCGAN-VAE with two VAE sets is therefore highly effective at improving model performance, particularly on imbalanced data, and yields a more accurate and stable model than the baseline trained on real data alone.
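For context, the CNN tumor-recognition component described above can be realized with a small convolutional classifier. The following is a minimal sketch, assuming PyTorch, 64x64 grayscale input patches, and a single-logit benign/malignant head; the layer sizes are illustrative assumptions, not the study's reported architecture.

```python
# Minimal sketch of a CNN binary classifier (benign vs. malignant),
# assuming PyTorch and 64x64 grayscale patches; the layer sizes are
# illustrative assumptions, not the study's reported architecture.
import torch
import torch.nn as nn

class TumorCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> 8x8 feature maps with 64 channels after 3 poolings
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 8 * 8, 1))

    def forward(self, x):
        # Returns a raw logit; apply sigmoid to obtain a malignancy probability.
        return self.head(self.features(x))

model = TumorCNN()
logits = model(torch.randn(4, 1, 64, 64))  # -> shape (4, 1)
```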
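The DCGAN side of the augmentation pipeline produces synthetic ultrasound-like patches from random noise, which are mixed into the training set. Below is a minimal sketch of a standard DCGAN generator, again assuming PyTorch and 64x64 grayscale output; the feature widths and depth are illustrative assumptions.

```python
# Minimal sketch of a DCGAN generator for augmentation, assuming PyTorch;
# sizes target 64x64 grayscale patches and are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            # z: (N, z_dim, 1, 1) -> (N, feat*8, 4, 4)
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            # -> (N, 1, 64, 64); tanh matches images rescaled to [-1, 1]
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of synthetic patches to mix into the training set.
g = Generator()
fake = g(torch.randn(16, 100, 1, 1))  # -> shape (16, 1, 64, 64)
```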
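The VAE generates further samples by decoding draws from a learned latent distribution. A minimal sketch follows, assuming PyTorch, flattened 64x64 inputs scaled to [0, 1], and fully connected layers; all dimensions are illustrative assumptions rather than the study's configuration.

```python
# Minimal sketch of a VAE for generating additional ultrasound-like samples,
# assuming PyTorch and flattened 64x64 inputs in [0, 1]; all dimensions are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, img_dim=64 * 64, hidden=400, z_dim=32):
        super().__init__()
        self.enc = nn.Linear(img_dim, hidden)
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec1 = nn.Linear(z_dim, hidden)
        self.dec2 = nn.Linear(hidden, img_dim)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld

vae = VAE()
x = torch.rand(8, 64 * 64)
recon, mu, logvar = vae(x)
loss = vae_loss(recon, x, mu, logvar)
```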
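Finally, the reported metrics (Precision, Recall, F1 score, ROC AUC, PR AUC) can be computed directly from the classifier's predicted probabilities. A minimal sketch using scikit-learn is shown below; y_true, y_prob, and the 0.5 decision threshold are illustrative placeholders. PR AUC is included because, as the results note, it is the more informative summary on imbalanced data.

```python
# Minimal sketch of the evaluation metrics, assuming scikit-learn;
# y_true, y_prob, and the 0.5 threshold are illustrative placeholders.
import numpy as np
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             roc_auc_score, average_precision_score)

def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_prob),
        # average_precision_score approximates PR AUC, the more informative
        # summary when the benign/malignant classes are imbalanced.
        "pr_auc": average_precision_score(y_true, y_prob),
    }

y_true = np.array([0, 1, 1, 0, 1, 0])
y_prob = np.array([0.2, 0.9, 0.6, 0.4, 0.7, 0.1])
print(evaluate(y_true, y_prob))
```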