
Chinese Word Segmentation Based on a Specialized Hidden Markov Model

Chinese Word Segmentation using Specialized HMM

Advisor: 張嘉惠

Abstract


Chinese word segmentation is a fundamental and very important preprocessing step in Chinese natural language processing. Although the field has been studied for decades and many segmentation algorithms have been proposed, research on the problem has not stopped and continues to receive growing attention. Recent segmentation systems tend to rely on statistical machine-learning methods, such as the hidden Markov model (HMM), to solve the problem. However, a standard HMM reaches an F-measure of only about 80% on Chinese word segmentation, so many studies turn to external resources or combine other machine-learning algorithms to assist segmentation. The goal of this study is to raise the accuracy of the HMM using the simplest possible approach and without any external resources. Our approach applies the concept of specialization, bringing information about segmentation ambiguity and unknown words into the HMM. Without modifying the model's training or testing procedures in any way, a two-stage specialization, which first extends the observation symbols and then extends the state symbols, greatly improves the HMM's segmentation accuracy. In the first stage, we combine the long-word-first (maximum matching) heuristic with a masking method to feed ambiguity and unknown-word information into the HMM, giving the model richer segmentation information to learn from. The experiments show that combining this simplest maximum matching segmentation indeed raises the HMM's performance substantially, improving the F-measure from 0.812 to 0.953. In the second-stage specialization, we use lexicalization to add new state symbols for high-frequency and high-error observation symbols; the experiments also confirm that this refinement further improves system performance, raising the segmentation F-measure from 0.953 to 0.963.
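To make the first-stage specialization more concrete, the following is a minimal Python sketch of how a long-word-first (maximum matching) segmenter and a mask for characters not covered by the dictionary could be used to build extended observation symbols for an HMM. The function names, the BMES tag set with a "U" mask tag, and the toy dictionary are illustrative assumptions, not the thesis implementation.

```python
# A minimal sketch (not the thesis implementation) of first-stage specialization:
# each character's observation symbol is augmented with the B/M/E/S tag proposed
# by a simple long-word-first (maximum matching) segmenter; single characters the
# dictionary cannot cover get a mask tag "U", exposing unknown-word information
# to the HMM. All names and the tagging scheme here are illustrative assumptions.

def maximum_matching(sentence, dictionary, max_len=4):
    """Greedy long-word-first segmentation over a word dictionary."""
    words, i = [], 0
    while i < len(sentence):
        for length in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words

def bmes_tags(words, dictionary):
    """BMES tags from the heuristic segmentation; single characters outside the
    dictionary get the mask tag 'U' (possible unknown-word character)."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S" if w in dictionary else "U")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def extended_observations(sentence, dictionary):
    """Extended observation symbols: character plus heuristic tag."""
    words = maximum_matching(sentence, dictionary)
    tags = bmes_tags(words, dictionary)
    chars = [c for w in words for c in w]
    return [f"{c}/{t}" for c, t in zip(chars, tags)]

if __name__ == "__main__":
    vocab = {"中文", "斷詞", "研究"}  # toy dictionary
    print(extended_observations("中文斷詞研究好", vocab))
    # -> ['中/B', '文/E', '斷/B', '詞/E', '研/B', '究/E', '好/U']
```

The key point of the sketch is that the HMM itself is untouched: only its observation alphabet grows, so standard training and Viterbi decoding can be reused as-is.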

Keywords

none

Parallel Abstract


The first step in Chinese language processing tasks is word segmentation. Various methods have been proposed in previous studies to address this problem, e.g., heuristic-based and statistical approaches. The hidden Markov model (HMM) is a statistical machine-learning approach that has been applied successfully in many fields, such as POS tagging and shallow parsing. However, we find that a standard HMM achieves an F-measure of only about 80% in Chinese word segmentation. As is commonly known, segmentation ambiguity and unknown words are the two main problems in Chinese word segmentation. In this paper, we propose a two-stage specialized HMM that incorporates this information into the model. In the first stage, we combine the maximum matching heuristic to capture segmentation ambiguity and use a masking approach to handle unknown-word information. By extending the observation symbols, the proposed M-HMM improves the F-measure from 0.812 to 0.953. In the second stage, we use a lexicalization technique to further improve HMM performance; the idea is to add new state symbols for high-frequency characters and symbols with high tagging error. Experimental results show that the lexicalized M-HMM further improves the F-measure from 0.953 to 0.963.
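The second-stage lexicalization can be pictured with the short sketch below, in which selected characters are attached to the basic word-boundary states so that they receive their own state symbols and thus their own transition and emission parameters. Selecting by raw frequency, the "state_character" naming, and the helper functions are assumptions for illustration; the thesis also targets symbols with high tagging error, which would use an error count instead of a frequency count in the same way.

```python
# A minimal sketch (illustrative, not the thesis code) of second-stage
# lexicalization: the HMM state set is enlarged by attaching selected
# characters to the basic B/M/E/S states, so those characters get their
# own transition/emission parameters. Frequency-based selection and the
# state naming below are assumptions.

from collections import Counter

BASE_STATES = ["B", "M", "E", "S"]

def select_lexicalized_chars(training_chars, top_k=10):
    """Pick the characters to lexicalize, e.g. the most frequent ones."""
    return {c for c, _ in Counter(training_chars).most_common(top_k)}

def lexicalize_state(state, char, lexicalized_chars):
    """Map a (state, character) pair to a possibly character-specific state."""
    return f"{state}_{char}" if char in lexicalized_chars else state

def lexicalized_state_sequence(chars, states, lexicalized_chars):
    """Rewrite a gold BMES state sequence into the enlarged state alphabet;
    the HMM is then trained on these sequences without changing its
    training or decoding procedure."""
    return [lexicalize_state(s, c, lexicalized_chars)
            for s, c in zip(states, chars)]

if __name__ == "__main__":
    chars = list("的的的中文斷詞")
    states = ["S", "S", "S", "B", "E", "B", "E"]
    lex = select_lexicalized_chars(chars, top_k=1)   # {'的'}
    print(lexicalized_state_sequence(chars, states, lex))
    # -> ['S_的', 'S_的', 'S_的', 'B', 'E', 'B', 'E']
```

After decoding, a character-specific state such as "S_的" is simply mapped back to its base tag "S", so the final segmentation output is unchanged in form.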

Parallel Keywords

none

Cited By


許桓瑜 (2012). The impact of long-sentence word segmentation and genetic algorithms on news classification [Master's thesis, Tamkang University]. Airiti Library. https://doi.org/10.6846/TKU.2012.00488
唐若華 (2010). A part-of-speech-based word segmentation method for improving a Mandarin speech synthesis system [Master's thesis, National Tsing Hua University]. Airiti Library. https://doi.org/10.6843/NTHU.2010.00487
方心伶 (2008). Chinese word segmentation and phonetic (Zhuyin) annotation [Master's thesis, National Tsing Hua University]. Airiti Library. https://doi.org/10.6843/NTHU.2008.00590
吳昆璟 (2008). A preliminary study on improving Chinese word segmentation with confidence measures [Master's thesis, National Tsing Hua University]. Airiti Library. https://doi.org/10.6843/NTHU.2008.00587
張問賢 (2008). Phone-based word segmentation and Zhuyin-to-character conversion [Master's thesis, National Tsing Hua University]. Airiti Library. https://doi.org/10.6843/NTHU.2008.00585
