
Beyond Negative Label: Advancing Few-Shot Out-of-Distribution Detection Performance via Positive Label Semantic Ensemble in Pretrained Vision-Language Model

Advisor: 吳家麟

Abstract


Out-of-Distribution (OOD) detection aims to equip models with the ability to recognize inputs that fall outside the training distribution. This capability is crucial for deploying models in real-world settings, particularly in safety-critical domains such as medical diagnosis and autonomous driving. Traditionally, many studies have used Convolutional Neural Networks (CNNs) to perform OOD detection from visual features alone. Recently, the rise of Vision-Language Models (VLMs) has opened a new avenue: these models combine label semantics with visual features to enable zero-shot or few-shot learning, improving adaptability and performance in diverse environments. In this thesis, we propose an innovative positive label semantic ensemble method that exploits the rich pretrained knowledge of VLMs: by ensembling the features of labels semantically related to each class label, the model learns more precise class-level features. We further incorporate negative semantic labels into few-shot training. Experimental results show that, with ImageNet-1K as the in-distribution dataset, our method reduces the FPR95 on OOD datasets by an average of 11.33 percentage points and raises the AUROC by an average of 2.47 percentage points compared with existing VLM-based methods, a substantial performance gain.
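
As a concrete illustration of this idea, the sketch below scores an input by ensembling the text embeddings of each in-distribution class label with semantically related terms, then contrasting the result against a pool of negative labels. This is a minimal sketch, not the thesis implementation: the embeddings are random placeholders standing in for a frozen CLIP-style encoder, and `positive_groups`, `negative_labels`, and the temperature `tau` are hypothetical choices.

```python
import numpy as np

# Illustrative sketch of label-semantic-ensemble OOD scoring (not the
# thesis code). Embeddings are random placeholders; in practice they
# would come from a frozen CLIP-style text/image encoder.

rng = np.random.default_rng(0)
d = 512  # typical CLIP embedding dimension

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Each ID class label is grouped with semantically related terms
# (hypothetical lists), plus a pool of unrelated negative labels.
positive_groups = {
    "dog": ["dog", "puppy", "canine"],
    "cat": ["cat", "kitten", "feline"],
}
negative_labels = ["galaxy", "spreadsheet", "volcano"]

# Placeholder text embeddings, one per label word.
text_emb = {w: normalize(rng.normal(size=d))
            for words in positive_groups.values() for w in words}
neg_emb = np.stack([normalize(rng.normal(size=d)) for _ in negative_labels])

# Ensemble: average each class's related-label embeddings into a single
# prototype, so a class is represented by richer semantics than one
# label string alone.
prototypes = np.stack([
    normalize(np.mean([text_emb[w] for w in words], axis=0))
    for words in positive_groups.values()
])

def id_confidence(image_emb, tau=0.01):
    """Softmax mass on ID prototypes versus negative labels over cosine
    similarities; low values suggest the input is OOD."""
    sims = np.concatenate([prototypes @ image_emb, neg_emb @ image_emb])
    w = np.exp(sims / tau)
    return w[: len(prototypes)].sum() / w.sum()

image_emb = normalize(rng.normal(size=d))  # placeholder image embedding
print(f"ID confidence: {id_confidence(image_emb):.3f}")
```

In a deployed detector, `id_confidence` would be thresholded so that inputs whose image embedding aligns more with the negative-label pool than with any ensembled class prototype are flagged as OOD.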

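The two metrics reported above can be computed as in the following self-contained sketch, under the common convention that higher scores indicate in-distribution inputs; the scores here are synthetic stand-ins for real model outputs. FPR95 is the false-positive rate on OOD data at the threshold where 95% of ID samples are accepted, and AUROC is the probability that a random ID sample outscores a random OOD sample.

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """Rank-based AUROC: probability a random ID sample scores above a
    random OOD sample (ties counted as 0.5)."""
    id_scores, ood_scores = np.asarray(id_scores), np.asarray(ood_scores)
    greater = (id_scores[:, None] > ood_scores[None, :]).sum()
    ties = (id_scores[:, None] == ood_scores[None, :]).sum()
    return (greater + 0.5 * ties) / (len(id_scores) * len(ood_scores))

def fpr_at_95_tpr(id_scores, ood_scores):
    """False-positive rate on OOD data at the threshold where 95% of ID
    samples are (correctly) accepted."""
    threshold = np.percentile(id_scores, 5)  # top 95% of ID scores pass
    return float(np.mean(np.asarray(ood_scores) >= threshold))

rng = np.random.default_rng(1)
id_s = rng.normal(1.0, 1.0, 1000)    # synthetic ID scores
ood_s = rng.normal(-1.0, 1.0, 1000)  # synthetic OOD scores
print(f"AUROC: {auroc(id_s, ood_s):.3f}, "
      f"FPR95: {fpr_at_95_tpr(id_s, ood_s):.3f}")
```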

