
Multi-Party Dialogue Question Answering with Role and Video Information

Advisor: 徐宏民

Abstract

Multi-party dialogue question answering (MPDQA) is an emerging topic in speech and language processing whose goal is to answer questions according to multi-party conversations. Prior works treat dialogues as plain passages and try to solve them with ordinary language models, neglecting important properties of dialogue comprehension such as role awareness and situation realization. From a new perspective, this thesis proposes two methods to assist these language models. (1) The Role-Aware Multi-Party Network (RAMPNet) utilizes speaker and role information to represent "who is speaking" and "who is mentioned", making role awareness available to the model. Experiments show the model's capability and effectiveness on the MPDQA task, especially on question types that rely on role relations. (2) Video-related multitask learning uses video information from TV shows to understand the complex situations in dialogues. Nevertheless, experiments show that this approach hinders the language model's learning of role knowledge, revealing a limitation of the existing MPDQA datasets.
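The abstract does not specify how RAMPNet injects "who is speaking" and "who is mentioned" into the model, but one common way to make such signals available is to add per-token speaker-role and mentioned-role embeddings to the token embeddings, analogous to how BERT sums token, segment, and position embeddings. The following is a minimal illustrative sketch of that idea only; the table names, role IDs, and dimensions are hypothetical, not taken from the thesis.

```python
import numpy as np

# Hypothetical embedding tables (random for illustration).
rng = np.random.default_rng(0)
VOCAB_SIZE, NUM_ROLES, DIM = 100, 10, 16
token_table = rng.normal(size=(VOCAB_SIZE, DIM))
speaker_table = rng.normal(size=(NUM_ROLES, DIM))   # role of the utterance's speaker
mention_table = rng.normal(size=(NUM_ROLES, DIM))   # role mentioned by this token (0 = none)

def role_aware_embed(token_ids, speaker_ids, mention_ids):
    """Sum token, speaker-role, and mentioned-role embeddings per token,
    so downstream layers can see who is speaking and who is mentioned."""
    return (token_table[token_ids]
            + speaker_table[speaker_ids]
            + mention_table[mention_ids])

# One 3-token utterance: spoken entirely by role 4, second token mentions role 7.
tokens = np.array([[1, 2, 3]])
speakers = np.array([[4, 4, 4]])
mentions = np.array([[0, 7, 0]])
out = role_aware_embed(tokens, speakers, mentions)
print(out.shape)  # (1, 3, 16)
```

In this scheme the role vocabulary is shared between the speaker and mention tables' index space, so the same character gets a consistent identity whether they are talking or being talked about.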

