Language testing is closely tied to language teaching and learning; however, writing test items by hand is a time-consuming and labor-intensive process. In recent years, Computer-Assisted Item Generation (CAIG) has become an important applied research topic in natural language processing. Existing item-generation studies have mostly focused on vocabulary, dictation, cloze, and reading comprehension questions, and little work has addressed English grammar. This thesis proposes a Web-based approach to the semi-automatic generation of English grammar test items. We first encode item-writing knowledge as test patterns, complemented by two test generation strategies, so that the computer can take sentences collected from the Web, combine them with automatic readability analysis, and produce two grammar question types similar to those in the Test of English as a Foreign Language (TOEFL): traditional multiple-choice and error detection. Based on this approach, we also built FAST, a prototype online computer-assisted grammar testing system. The system not only generates grammar items on the fly for teachers to adopt, but also provides grammar exercises to support learners' self-study. An evaluation of verb-related grammar items shows that our method can generate roughly 80% acceptable items; moreover, when these items were administered in an actual test, the analysis shows that the computer-generated items also achieve good item discrimination. In summary, our method, combining the design of grammar test patterns, generation strategies, and the use of Web resources, has great potential for application in adaptive computer-assisted language learning.
Testing has long been acknowledged as an integral part of language teaching and learning. However, manually designing language tests is not only time-consuming but also labor-intensive. Recently, owing to the remarkable progress of computer technology, computer-assisted item generation (CAIG) has drawn considerable attention and has become one of the active research areas in Computer-Assisted Language Learning (CALL). CAIG provides an alternative and economical way to generate questions automatically in a relatively short time, to build large-scale item banks effectively, and to support adaptive testing for incremental language learning. Previous work has explored the generation of reading comprehension, vocabulary, and listening dictation tests, but very little has been done on grammar tests. The purpose of this thesis is to address the computer-aided creation of English grammar tests. We introduce a method for the semi-automatic generation of grammar test items by applying Natural Language Processing (NLP) techniques. Based on manually designed patterns, sentences gathered from the Web are transformed into tests of grammaticality. The method involves representing test-writing knowledge as test patterns, acquiring authentic sentences from the Web, and applying generation strategies to transform sentences into items. At runtime, sentences are converted into two types of TOEFL-style questions: multiple-choice and error detection. We also describe a prototype system, FAST (Free Assessment of Structural Tests). Evaluation of a set of generated questions indicates that the proposed method performs satisfactorily in both item facility and item discrimination. Our methodology thus offers a promising approach with significant potential for computer-assisted language learning and assessment.
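The transformation step described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual implementation: the function names and the hand-listed distractors (ungrammatical variants of a target verb form) are assumptions made for the example.

```python
# Minimal sketch of turning an authentic sentence into two TOEFL-style
# grammar items. The distractor list would, in a real system, come from
# a test pattern; here it is written out by hand for illustration.

def make_multiple_choice(sentence, target, distractors):
    """Blank out the target verb form and offer it among ungrammatical
    variants; returns (stem, options, index of the correct answer)."""
    stem = sentence.replace(target, "____", 1)
    options = sorted([target] + distractors)  # fixed order for clarity
    return stem, options, options.index(target)

def make_error_detection(sentence, target, wrong_form):
    """Swap the correct form for an ungrammatical one; the test taker
    must locate the erroneous segment."""
    return sentence.replace(target, wrong_form, 1), wrong_form

stem, options, key = make_multiple_choice(
    "She has finished her homework.",
    "finished",
    ["finish", "finishing", "to finish"],
)
print(stem)          # She has ____ her homework.
print(options[key])  # finished

item, error = make_error_detection(
    "She has finished her homework.", "finished", "finishing")
print(item)          # She has finishing her homework.
```

In practice the target form and its variants would be identified by the test pattern (e.g. a tagged verb phrase) rather than passed in by hand, but the item shapes — a blanked stem with one key among distractors, and a sentence with one planted error — are as shown.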