Deep learning models are often affected by distribution shifts between training and test data, leading to significant performance degradation. Domain generalization aims to address this distribution-shift problem: a model is trained only on source domain data and is expected to generalize to unseen target domains. In this thesis, we study domain-generalized image classification, in which the model must classify images from unseen target domains. We propose a domain generation model that produces novel data domains. Using feature disentanglement techniques, we separate the content and domain features of input images and learn a generalizable domain feature space for describing domain information. By training on data from multiple source domains, this domain feature space becomes a closed, continuous, and interpolatable space, enabling the generation of novel yet realistic data domains. Learning from both existing and generated data domains, our model attains sufficient generalization ability and performs well on standard benchmark datasets.
Deep learning models typically suffer from the distribution shift between training and test data domains, resulting in significant performance degradation. Domain generalization (DG) tackles this distribution shift problem: the model is trained solely on source domain data and is expected to generalize to unseen target domains. In this thesis, we address the domain-generalized image classification task, in which images from unseen target domains need to be recognized. We propose a domain hallucination approach that generates unseen data domains. We exploit feature disentanglement techniques to separate the content and domain features of the input data, and derive a disentangled and generalized domain feature space for describing domain information. Learned from data observed across multiple source domains, the derived domain feature space is closed, continuous, and interpolatable, allowing the synthesis of novel yet realistic data domains. By learning from hallucinated data across both existing and novel data domains, our model exhibits sufficient generalization ability and performs favorably against state-of-the-art DG methods on benchmark datasets.
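The core idea above — disentangling each input into a content feature and a domain feature, then interpolating between domain codes of different source domains to hallucinate a new domain — can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual architecture: the feature dimensions, the linear interpolation rule, and the `hallucinate_domain` helper are all assumptions for demonstration, and a learned generator would replace the simple recombination step at the end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical disentangled features for one image from each of two
# source domains (names and dimensions are illustrative assumptions).
content_feat = rng.standard_normal(16)   # class/content information
domain_feat_a = rng.standard_normal(8)   # domain code from source domain A
domain_feat_b = rng.standard_normal(8)   # domain code from source domain B

def hallucinate_domain(z_a, z_b, alpha):
    """Linearly interpolate two domain codes. Because the learned domain
    feature space is closed, continuous, and interpolatable, the result
    can be treated as a novel yet plausible domain code."""
    return alpha * z_a + (1.0 - alpha) * z_b

# A code lying between the two observed domains.
z_new = hallucinate_domain(domain_feat_a, domain_feat_b, 0.3)

# In the full model, a generator G(content, domain) would decode this
# pair back into an image in the hallucinated domain; here we simply
# concatenate the two features to show the recombination step.
hallucinated_input = np.concatenate([content_feat, z_new])
print(hallucinated_input.shape)  # (24,)
```

Training the classifier on such recombined samples, in addition to the original source data, is what exposes it to domains it has never observed.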