Chinese word segmentation is a fundamental and important task in Chinese natural language processing. Recent segmentation systems tend to apply machine learning algorithms to this problem. However, traditional approaches such as the hidden Markov model alone cannot achieve satisfactory segmentation performance (F-measure around 80%), so many studies rely on external resources or combine other machine learning algorithms to aid segmentation. When external resources are hard to obtain, achieving accurate segmentation by simple means becomes the goal of this study. In this paper, we build a dictionary from the vocabulary provided in the training data and use forward and backward maximum matching segmentation results as feature functions for sequence-labeling machine learning, in order to improve the labeling accuracy of the hidden Markov model (HMM) and conditional random fields (CRF). We find that, with maximum matching, segmentation performance can be improved through dictionary masking and specialization without modifying the training and testing procedures of the models at all. Experimental results show that maximum matching greatly improves HMM segmentation performance (F-measure: 0.812 → 0.948); the masking approach further raises performance to 0.953; and selecting characters with high error rates as specialized words raises it again to 0.963. When CRF is adopted as the sequence-labeling model, dictionary masking alone is enough to raise the system's segmentation performance to 0.963.
Chinese word segmentation is a vital and required step in many Chinese text processing tasks. Previous studies have proposed various machine learning methods to address this problem; to achieve high performance, many of them rely on external resources combined with various machine learning algorithms. The goal of this paper is to construct a simple and effective Chinese word segmentation tool without external resources, that is, a closed test of Chinese word segmentation. We build a vocabulary from the training data and combine maximum matching segmentation results with sequence labeling methods, namely the hidden Markov model (HMM) and conditional random fields (CRF). The main idea is to supply the machine learning algorithm with ambiguity information via forward and backward maximum matching, and with unknown-word information via vocabulary masking. Experimental results show that maximum matching and vocabulary masking significantly improve HMM segmentation performance (F-measure: 0.812 → 0.948 → 0.953), while combining maximum matching with CRF achieves an F-measure of 0.953, which vocabulary masking further improves to 0.963.
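As a minimal sketch of the forward and backward maximum matching underlying the proposed features (the dictionary words and the sample sentence below are illustrative assumptions, not taken from the paper's data), the two scans can disagree on ambiguous spans, and that disagreement is exactly the ambiguity signal fed to the sequence labeler:

```python
def forward_max_match(text, vocab, max_len=6):
    """Forward maximum matching: at each position, greedily take the
    longest dictionary word; fall back to a single character."""
    result, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            word = text[i:i + length]
            if length == 1 or word in vocab:
                result.append(word)
                i += length
                break
    return result

def backward_max_match(text, vocab, max_len=6):
    """Backward maximum matching: the same greedy idea, scanning
    from the right end of the string toward the left."""
    result, j = [], len(text)
    while j > 0:
        for length in range(min(max_len, j), 0, -1):
            word = text[j - length:j]
            if length == 1 or word in vocab:
                result.insert(0, word)
                j -= length
                break
    return result

# Hypothetical dictionary; 研究生命起源 is a classic ambiguous example.
vocab = {"研究", "研究生", "生命", "起源"}
print(forward_max_match("研究生命起源", vocab))   # ['研究生', '命', '起源']
print(backward_max_match("研究生命起源", vocab))  # ['研究', '生命', '起源']
```

Positions where the two outputs differ mark ambiguous regions, which can be encoded as per-character features for the HMM or CRF labeler.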