This thesis focuses on preposition error correction for language learners. We treat preposition correction as a classification task. Unlike previous work, we use a web-scale corpus in place of the hand-compiled reference corpora that earlier studies relied on. A web-scale corpus contains rich contextual information, and this large amount of data allows the classifier to judge correct preposition usage more accurately. This thesis implements a series of experiments to observe various aspects of the performance of models trained on a web-scale corpus. In addition, we propose a new method that uses the n-grams surrounding a preposition to estimate the likelihood of each candidate preposition in a given sentence. The amount of data in a web-scale corpus is sufficient to improve the model's accuracy on learners' writing. The final evaluation shows that our method indeed improves the accuracy of preposition correction on learner data. A writing-assistance system built on this method can help learners acquire correct preposition usage more efficiently.

We address the problem of correcting preposition errors in learners' writing. We treat preposition correction as a classification problem and train a statistical classifier to predict the correct preposition. Unlike most work on error correction, our method uses a web-scale corpus as training data instead of a human-compiled reference corpus. A web-scale corpus contains an enormous amount of contextual information, providing broad coverage of the contexts the model needs to predict correct prepositions. In this paper, we conduct a set of experiments to observe the behavior of models trained on a web-scale corpus. We also introduce a novel method to correct prepositions, especially those in learners' writing: using the n-grams spanning a preposition, we calculate the likelihood of each candidate preposition. The number of n-grams available from a web-scale corpus is large enough to support better predictions. Evaluation shows that the method outperforms other models also trained on web-scale corpora. Our method effectively determines appropriate prepositions in learners' writing, suggesting that it could serve as a writing-assistance tool for English learners to learn correct preposition usage.
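The idea of scoring candidate prepositions with the n-grams that span the preposition slot can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the toy `NGRAM_COUNTS` table, the candidate list, and the add-one floor for unseen n-grams are all assumptions standing in for a real web-scale n-gram collection.

```python
import math

# Toy trigram counts standing in for a web-scale corpus (hypothetical data).
NGRAM_COUNTS = {
    ("it", "depends", "on"): 800,
    ("depends", "on", "the"): 900,
    ("on", "the", "weather"): 700,
    ("it", "depends", "of"): 2,
    ("depends", "of", "the"): 3,
    ("of", "the", "weather"): 400,
}

# A small candidate set; a real system would consider all common prepositions.
CANDIDATES = ["on", "of", "in", "at"]

def score(tokens, slot, prep, n=3):
    """Sum log-counts of every n-gram whose window covers the preposition slot."""
    filled = tokens[:slot] + [prep] + tokens[slot + 1:]
    total = 0.0
    for start in range(max(0, slot - n + 1), min(slot, len(filled) - n) + 1):
        gram = tuple(filled[start:start + n])
        # Floor unseen n-grams at count 1 so log() stays defined.
        total += math.log(NGRAM_COUNTS.get(gram, 1))
    return total

def correct_preposition(tokens, slot):
    """Return the candidate preposition with the highest spanning-n-gram score."""
    return max(CANDIDATES, key=lambda p: score(tokens, slot, p))

sentence = ["it", "depends", "of", "the", "weather"]  # learner error: "of"
print(correct_preposition(sentence, 2))  # → on
```

Note that the original (possibly erroneous) preposition in the sentence is ignored; only the surrounding context determines the score, which is what lets the method correct learner errors rather than reinforce them.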