基於辯論歷程之反論點生成

Counter-argument generation with debating history

Advisor: 陳信希 (Hsin-Hsi Chen)

Abstract


Counter-argument generation is a highly challenging research area in natural language processing, as it can involve many sub-problems at once, such as argument mining, natural language generation, natural language understanding, and even information retrieval. To date, research on counter-argument generation has only addressed the single-turn setting, in which a counter-argument is generated from a single statement containing several arguments. In real-world debating, however, a concluding argument usually emerges from a series of back-and-forth exchanges, so a counter-argument generation model should be able to organize and understand a debating history spanning multiple turns. This thesis makes two main contributions. First, it is the first work to introduce debating history into counter-argument generation. Second, we construct a large-scale dataset for training the counter-argument generator. To better understand the importance of debating history for counter-argument generation, we experiment with several different models; the results show that, once debating history is incorporated, the models generate more appropriate counter-arguments.
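As an illustration of how (debating history, counter-argument) training pairs could be derived from threaded discussions, the following Python sketch walks a debate thread and pairs every reply with the ordered list of turns that precede it. The thread structure, field names, and example texts are assumptions made for illustration; the abstract does not describe the thesis's actual dataset-construction pipeline.

```python
# Illustrative sketch only: the thesis abstract does not describe its exact
# pipeline, so the thread structure and field names below are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Turn:
    """One post in a debate thread."""
    text: str
    replies: List["Turn"] = field(default_factory=list)


def extract_pairs(root: Turn) -> List[Tuple[List[str], str]]:
    """Emit (debating history, counter-argument) pairs from a thread.

    The history is the ordered list of ancestor turns; the target is the
    reply that answers the last of them.
    """
    pairs: List[Tuple[List[str], str]] = []

    def walk(turn: Turn, history: List[str]) -> None:
        current = history + [turn.text]
        for reply in turn.replies:
            # Each reply is treated as a counter-argument to the turn it
            # answers, conditioned on all preceding turns.
            pairs.append((current, reply.text))
            walk(reply, current)

    walk(root, [])
    return pairs


if __name__ == "__main__":
    thread = Turn(
        "School uniforms should be mandatory.",
        [Turn("Uniforms suppress students' self-expression.",
              [Turn("Self-expression is not limited to clothing.")])],
    )
    for history, counter in extract_pairs(thread):
        print(len(history), "turn(s) of history ->", counter)
```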

Abstract (English)


Counter-argument generation is one of the most challenging problems in natural language processing, as it involves many sub-problems such as argument mining (AM), natural language generation (NLG), natural language understanding, and even information retrieval (IR). To date, research on counter-argument generation has only addressed the single-turn debate scenario, that is, generating counter-arguments from a single statement of someone's viewpoint. In real-world debating, however, an argumentative conclusion usually emerges from multiple turns of discussion, so an argument generation system should be able to model the multi-turn discussion history. This thesis makes two main contributions. First, it is the first work to explore counter-argument generation with multi-turn debating history as context. Second, we construct a large-scale dataset of around 800k counter-arguments for training the generator. To further investigate the importance of debating history, we experiment with different models. The results show that by incorporating debating history, the models generate more appropriate counter-arguments.
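To make the task formulation concrete, here is a minimal sketch of how a multi-turn debating history could be serialized into a single conditioning sequence for an off-the-shelf seq2seq generator. The abstract does not name the models used in the thesis's experiments; BART from Hugging Face Transformers and the separator-based serialization are stand-in assumptions, and without fine-tuning on counter-argument pairs the output would not be a meaningful counter-argument. The sketch only shows the input/output interface.

```python
# Minimal sketch, not the thesis's actual system: BART and the separator-based
# serialization of the debating history are stand-in assumptions.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

history = [
    "School uniforms should be mandatory.",
    "Uniforms suppress students' self-expression.",
]
# Concatenate the turns, oldest first, separated by the tokenizer's sep token,
# so the generator is conditioned on the whole debating history.
source = f" {tokenizer.sep_token} ".join(history)
inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```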

