This study evaluates the performance of nnU-Net, a deep learning framework that automatically configures its own hyperparameters, for the automatic segmentation of organs at risk (OARs) in intracranial radiosurgery. Building on previous work on tumor segmentation, we extended our focus to OAR segmentation to address gaps identified in the existing literature. Drawing on recent advances, we developed an automatic OAR segmentation model that utilizes multimodal imaging. The study uses a large-scale dataset from the CyberKnife Center at National Taiwan University Hospital, focusing on six critical organs: the brainstem, bilateral eyes, optic chiasm, and bilateral optic nerves. Our experimental methodology consists of three components: evaluating segmentation accuracy across different imaging modalities, examining the feasibility of joint OAR and target volume (TV) segmentation, and analyzing the distance relationships between OARs and tumors. The results show that the 3D low-resolution model with multimodal integration achieves the best performance. However, the optic chiasm exhibits relatively lower segmentation accuracy, as its small volume poses significant challenges for both automatic and manual delineation. Nevertheless, our OAR model achieves accuracy comparable to expert manual delineation for brainstem segmentation and validates the feasibility of a joint OAR-TV segmentation model, offering a new direction for clinical automation.
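As a minimal illustrative sketch (not taken from the thesis), the two quantitative analyses mentioned above could be expressed as follows: per-organ segmentation accuracy via the Dice similarity coefficient, and the OAR-tumor distance relationship via a Euclidean distance transform of the tumor mask. Array names, voxel spacing, and the toy masks are assumptions for demonstration only.

```python
# Hypothetical sketch: Dice accuracy and OAR-to-tumor distance on binary 3D masks.
import numpy as np
from scipy import ndimage


def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom


def min_oar_tumor_distance_mm(oar: np.ndarray, tumor: np.ndarray, spacing) -> float:
    """Shortest distance (mm) from any OAR voxel to the tumor, using a
    Euclidean distance transform of the region outside the tumor mask."""
    dist_to_tumor = ndimage.distance_transform_edt(~tumor.astype(bool), sampling=spacing)
    return float(dist_to_tumor[oar.astype(bool)].min())


if __name__ == "__main__":
    # Toy 3D volumes with 1 mm isotropic voxels (hypothetical values).
    shape = (64, 64, 64)
    oar = np.zeros(shape, dtype=bool)
    tumor = np.zeros(shape, dtype=bool)
    oar[20:25, 20:25, 20:25] = True
    tumor[40:45, 40:45, 40:45] = True
    print("Dice(oar, oar) =", dice_coefficient(oar, oar))  # 1.0
    print("min OAR-tumor distance (mm) =",
          min_oar_tumor_distance_mm(oar, tumor, spacing=(1.0, 1.0, 1.0)))
```

In practice the masks would come from the nnU-Net predictions and the expert contours exported from the treatment planning system, with the voxel spacing read from the image metadata rather than assumed.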