
EBSUM: An Enhanced BERT-based Extractive Summarization Framework

Abstract


Most automatic summarization methods fall into two categories: extractive and abstractive. Abstractive methods rewrite the source document to generate a summary, but doing so reliably is difficult: the generated text often suffers from disfluency and repeated words. Extractive methods instead select sentences directly from the document, avoiding these problems. Most existing extractive methods based on BERT (Bidirectional Encoder Representations from Transformers) first use BERT to obtain sentence representations and then fine-tune a model to select summary sentences. In this paper, we propose a novel enhanced BERT-based extractive summarization framework (EBSUM). It takes sentence position information into account, uses reinforcement learning to align the training objective with the evaluation metric, and directly incorporates the maximal marginal relevance (MMR) criterion into the summarization model to avoid selecting redundant information. In experiments on the widely used CNN/DailyMail dataset, EBSUM achieves strong results and outperforms a range of classic neural-network-based summarization models.
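
For context, the MMR criterion referred to above is conventionally written as follows; this is the standard formulation from Carbonell and Goldstein (1998), not an equation reproduced from this paper:

$$\mathrm{MMR} = \arg\max_{s_i \in R \setminus S} \Big[ \lambda \, \mathrm{Sim}_1(s_i, D) - (1 - \lambda) \max_{s_j \in S} \mathrm{Sim}_2(s_i, s_j) \Big]$$

where $R$ is the set of candidate sentences, $S$ the set of already-selected sentences, $D$ the document, and $\lambda \in [0, 1]$ trades relevance against redundancy.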

Parallel Abstract


Automatic summarization methods can be categorized into two major streams: the extractive summarization and the abstractive summarization. Although abstractive summarization is to generate a short paragraph for expressing the original document, but most of the generated summaries are hard to read. On the contrary, extractive summarization task is to extract sentences from the given document to construct a summary. Recently, BERT (Bidirectional encoder representation from transformers), which has been introduced to several NLP-related tasks and achieved remarkable results, is a pre-trained language representation method. In the context of extractive summarization, BERT is usually be used to obtain representations for sentences and documents, and then a simple model is employed to select potential summary sentences based on the inferred representations. In this paper, an enhanced BERT-based extractive summarization framework (EBSUM) is proposed. The major innovations are: first, EBSUM takes the sentence position information into account; second, in order to maximize the ROUGE score, the model is trained by the reinforcement learning strategy; third, to avoid the redundancy information, the maximal marginal relevance (MMR) criterion is incorporated with the proposed EBSUM model. In the experiments, EBSUM can outperforms several state-of-the-art models on the CNN/DailyMail corpus.
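
To make the redundancy-avoidance step concrete, below is a minimal sketch of greedy MMR-based sentence selection. It assumes per-sentence salience scores (e.g., from a BERT-based scorer) and sentence embeddings are already available; all names (mmr_select, cosine, lam) are illustrative assumptions, not identifiers from the paper.

    # Minimal sketch of greedy MMR sentence selection (illustrative only).
    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    def mmr_select(scores, vectors, k=3, lam=0.7):
        # Greedily pick k sentences, trading salience against redundancy.
        #   scores:  per-sentence salience scores, shape (n,)
        #   vectors: per-sentence embeddings, shape (n, d)
        #   lam:     weight on salience vs. redundancy penalty
        selected = []
        candidates = list(range(len(scores)))
        while candidates and len(selected) < k:
            def mmr(i):
                redundancy = max((cosine(vectors[i], vectors[j]) for j in selected),
                                 default=0.0)
                return lam * scores[i] - (1 - lam) * redundancy
            best = max(candidates, key=mmr)
            selected.append(best)
            candidates.remove(best)
        return sorted(selected)  # restore document order

    # Example: three sentences, the first two nearly identical.
    scores = np.array([0.9, 0.85, 0.6])
    vectors = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
    print(mmr_select(scores, vectors, k=2))  # -> [0, 2], skipping the near-duplicate

The example picks the top-scoring sentence first, then skips its near-duplicate in favor of the more distinct third sentence, which is exactly the behavior the MMR term is meant to enforce.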

Keywords

Auto-summarization; Extractive; BERT; Reinforcement Learning; MMR

References


Brin, S., & Page, L. (1998). The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1-7), 107-117. doi: 10.1016/S0169-7552(98)00110-X
Carbonell, J. G., & Goldstein, J. (1998). The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of SIGIR '98, 335-336. doi: 10.1145/290941.291025
Erkan, G., & Radev, D. R. (2004). LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22(1), 457-479. doi: 10.1613/jair.1523
Fan, W., & Gordon, M. D. (2014). The power of social media analytics. Communications of the ACM, 57(6), 74-81. doi: 10.1145/2602574
Gehrmann, S., Deng, Y., & Rush, A. M. (2018). Bottom-up abstractive summarization. In Proceedings of EMNLP 2018, 4098-4109. doi: 10.18653/v1/D18-1443
