
結合圖與上下文語言模型技術於常見問答檢索之研究

A Study on the Combination of Graphs and Contextualized Language Models for FAQ Retrieval

Advisor: 陳柏琳

Abstract


In recent years, deep learning techniques have made breakthrough progress and delivered impressive performance in many natural language processing applications. With vast amounts of information spreading rapidly, how to access information effectively remains an important issue, and FAQ (Frequently Asked Question) retrieval has become one of the key technologies for addressing it. FAQ retrieval is widely applied in many domains, such as e-commerce services and online forums; its goal is to provide the most appropriate answer in response to a user's query (question). To date, several FAQ retrieval strategies have been proposed, such as comparing the similarity between the user query and the stored questions, measuring the relevance between the user query and the answers associated with those questions, or classifying the user query. Accordingly, many novel contextualized deep neural language models have been employed to realize these strategies, for example BERT (Bidirectional Encoder Representations from Transformers) and its extensions such as K-BERT and Sentence-BERT. Although BERT and its extensions have achieved good results on FAQ retrieval, there is still room for improvement on FAQ tasks that require general domain knowledge. This thesis therefore proceeds in five major research stages. First, we examine three different FAQ retrieval strategies and compare how combinations of different strategies and methods perform on the FAQ retrieval task. Second, we discuss how extra information such as knowledge graphs can strengthen BERT for FAQ retrieval, and propose an unsupervised knowledge-graph injection method to improve the model. Third, we combine supervised and unsupervised methods to remedy the performance degradation caused by the diverse answer types in FAQ retrieval. Fourth, we further improve the model by reranking with a voting mechanism. Finally, we combine a Graph Convolutional Network (GCN) with a contextualized language model (BERT) so that, by constructing a heterogeneous graph, the model can take the relations among queries (questions) into account. We conduct a series of experiments on a Chinese question-answering corpus from the Taipei City Government (TaipeiQA) and also investigate data augmentation methods. The experimental results show that the proposed methods can improve general FAQ retrieval applications to a certain degree.

Parallel Abstract


Recent years have witnessed significant progress in the development of deep learning techniques, which have achieved state-of-the-art performance in a wide variety of natural language processing (NLP) applications. With the rapid spread of tremendous amounts of information, how to access content effectively has become an essential research issue, and the FAQ (Frequently Asked Question) retrieval task has emerged as one of the important technologies for addressing it. FAQ retrieval, which aims to provide relevant information in response to frequently asked questions or concerns, has far-reaching applications such as e-commerce services and online forums. In the common setting of the FAQ retrieval task, a collection of question-answer (Q-A) pairs compiled in advance is capitalized on to retrieve an appropriate answer in response to a user query that is likely to recur frequently. To date, many strategies have been proposed to approach FAQ retrieval, ranging from comparing the similarity between the query and a stored question, to scoring the relevance between the query and the answer associated with a question, to classifying user queries. As such, a variety of contextualized language models have been extended and developed to operationalize the aforementioned strategies, such as BERT (Bidirectional Encoder Representations from Transformers), K-BERT, and Sentence-BERT. Although BERT and its variants have demonstrated reasonably good results on various FAQ retrieval tasks, they can still fall short on tasks that call for generic knowledge. In view of this, this thesis is organized into five major research stages. First, we discuss three different FAQ retrieval strategies and compare the synergistic effects of combining different strategies and methods. Second, we explore the utility of injecting an external knowledge base into BERT for FAQ retrieval, and propose an unsupervised knowledge-graph injection method. Third, we present an effective hybrid approach for FAQ retrieval, exploring the synergistic effect of combining an unsupervised IR method with supervised contextualized language models. Fourth, we propose an effective voting mechanism that reranks answer hypotheses for better performance. Finally, we construct a heterogeneous graph network and combine a graph convolutional network (GCN) with a contextualized language model (BERT) to capture the global question-question, question-word, and word-word relations, which are used to augment the embeddings derived from BERT for better FAQ retrieval. We conduct extensive experiments on a publicly available FAQ dataset (viz. TaipeiQA) to evaluate the utility of the proposed approaches; the results confirm their promising efficacy in comparison with some state-of-the-art methods.
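
To make the first strategy concrete, the following is a minimal sketch of query-question similarity retrieval with Sentence-BERT. The model name, the FAQ entries, and the retrieve() helper are illustrative assumptions, not the exact configuration used in the thesis.

# Query-question (q-Q) similarity with Sentence-BERT embeddings.
from sentence_transformers import SentenceTransformer, util

# Hypothetical FAQ collection of (question, answer) pairs.
faq = [
    ("How do I apply for a parking permit?", "Submit form A-12 at any district office."),
    ("Where can I pay my water bill?", "Water bills can be paid online or at banks."),
]

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
question_embs = model.encode([q for q, _ in faq], convert_to_tensor=True)

def retrieve(query, top_k=1):
    """Rank FAQ entries by cosine similarity between the query and each stored question."""
    query_emb = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, question_embs)[0]
    best = scores.argsort(descending=True)[:top_k]
    return [(faq[i][0], faq[i][1], float(scores[i])) for i in best]

print(retrieve("how to get a parking permit"))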
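
The second stage, unsupervised knowledge injection, can be pictured as enriching the query with knowledge-graph triples before encoding. The sketch below is a deliberate simplification (K-BERT proper splices triples into the token sequence and limits their influence with visibility masks); the toy knowledge graph and the inject_knowledge() helper are invented for illustration.

# Append surface forms of matching triples so the encoder sees the extra context.
kg = {
    "parking permit": [("parking permit", "issued by", "district office")],
    "water bill": [("water bill", "payable at", "banks")],
}

def inject_knowledge(query):
    extras = [
        f"{s} {p} {o}"
        for entity, triples in kg.items() if entity in query
        for s, p, o in triples
    ]
    return query if not extras else query + " [SEP] " + " ; ".join(extras)

print(inject_knowledge("how do I renew my parking permit"))
# -> how do I renew my parking permit [SEP] parking permit issued by district office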
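
One plausible reading of the third stage, the hybrid supervised/unsupervised combination, is a linear interpolation of an unsupervised BM25 score with a supervised BERT-style cross-encoder score. The libraries (rank_bm25, sentence-transformers), the model name, and the weight alpha below are assumptions for illustration.

import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

questions = [
    "How do I apply for a parking permit?",
    "Where can I pay my water bill?",
]

bm25 = BM25Okapi([q.lower().split() for q in questions])
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def hybrid_scores(query, alpha=0.5):
    """Min-max normalize both score sources, then interpolate with weight alpha."""
    sparse = np.array(bm25.get_scores(query.lower().split()))
    dense = np.array(reranker.predict([(query, q) for q in questions]))
    def norm(x):
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return alpha * norm(sparse) + (1.0 - alpha) * norm(dense)

print(questions[int(hybrid_scores("how to get a parking permit").argmax())])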
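
Neither abstract spells out the exact voting rule used in the fourth stage, so the sketch below uses reciprocal rank fusion (RRF) as one common stand-in: each strategy's ranked list of answer hypotheses acts as a voter, and hypotheses ranked high by many voters win.

from collections import defaultdict

def rrf_vote(rankings, k=60):
    """Fuse several ranked candidate lists via reciprocal rank fusion."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, candidate in enumerate(ranking):
            scores[candidate] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Three strategies (q-Q similarity, q-A relevance, query classification)
# each propose an ordering of the same answer hypotheses.
print(rrf_vote([["a1", "a2", "a3"], ["a2", "a1", "a3"], ["a2", "a3", "a1"]]))
# -> ['a2', 'a1', 'a3']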
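
The final stage can be sketched as propagating BERT-derived node features over a normalized heterogeneous adjacency matrix and interpolating the result with BERT's own predictions, in the spirit of BertGCN-style models. The toy adjacency, the dimensions, and the mixing weight lam are all illustrative assumptions; in TextGCN-style constructions the question-word and word-word edges would be weighted by TF-IDF and PMI rather than set by hand.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(self.linear(a_hat @ h))

def normalize_adj(a):
    a = a + torch.eye(a.size(0))           # add self-loops
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt     # D^-1/2 (A + I) D^-1/2

# Toy heterogeneous graph: 3 question nodes followed by 2 word nodes.
# Question-word edges link a question to the words it contains;
# question-question edges arise from shared words.
adj = torch.tensor([
    [0., 1., 0., 1., 0.],
    [1., 0., 1., 1., 1.],
    [0., 1., 0., 0., 1.],
    [1., 1., 0., 0., 0.],
    [0., 1., 1., 0., 0.],
])

a_hat = normalize_adj(adj)
bert_feats = torch.randn(5, 768)           # stand-in for BERT [CLS] embeddings

hidden = GCNLayer(768, 256)(a_hat, bert_feats)
gcn_logits = nn.Linear(256, 4)(hidden)     # 4 hypothetical answer classes
bert_logits = torch.randn(5, 4)            # stand-in for BERT classifier output

lam = 0.7                                  # interpolation weight (assumed)
final_logits = lam * gcn_logits + (1 - lam) * bert_logits
print(final_logits.shape)                  # torch.Size([5, 4])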
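
Both abstracts also mention a study of data augmentation without naming the methods examined, so random word dropout below is purely one illustrative possibility.

import random

def word_dropout(text, p=0.1, seed=0):
    """Create an augmented variant by randomly dropping words with probability p."""
    rng = random.Random(seed)
    kept = [w for w in text.split() if rng.random() > p]
    return " ".join(kept) if kept else text

print(word_dropout("where can I pay my water bill online"))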

