
A Study on Headline Generation Methods that Take the News Media's Stance into Account

Learning to Generate News Headlines with Media’s Stance

Advisor: 陳信希 (Hsin-Hsi Chen)

Abstract


With the rise of neural networks, many areas of natural language processing research have made entirely new progress. Text generation is one of them: neural networks can capture complex linguistic structure and generate sentences that read as if written by humans. Beyond using neural networks to strengthen traditional text generation tasks such as machine translation and text summarization, other studies have begun to incorporate various conditions into generation, such as tense, length, and sentiment. In addition to text generation, neural networks are also widely applied to classification tasks in natural language processing, among which stance detection and classification is a popular research topic. Inspired by conditional text generation and stance classification, this thesis attempts to generate news headlines that match the stance of specific Taiwanese news media.

Parallel Abstract


As neural network models thrive, natural language processing has entered a new chapter. Powerful models motivate the innovation and renovation of text generation tasks. Text generation is no longer limited to straightforward tasks such as text summarization or machine translation; recent work generates text under a variety of novel conditions, e.g., sentence length, tense, and sentiment. Neural models have also achieved great success in classification tasks, and stance classification is one of the popular research topics. Inspired by conditional text generation and stance classification, we propose a task of generating news headlines with the specific stances of Taiwan's news media.
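
As a concrete illustration of the conditional generation idea described above, the sketch below prepends a per-outlet stance control token to the article text and asks a pretrained sequence-to-sequence model to produce a headline. This is a minimal sketch, not the thesis's actual method: the checkpoint name google/mt5-small, the STANCE_TOKENS mapping, and the fine-tuning data format are all assumptions introduced for illustration.

    # Minimal sketch of stance-conditioned headline generation via control tokens.
    # NOTE: this illustrates the general idea only; it is not the thesis's model.
    from transformers import AutoTokenizer, MT5ForConditionalGeneration

    # Hypothetical control tokens, one per news outlet whose stance is imitated.
    STANCE_TOKENS = {"media_A": "<stance_A>", "media_B": "<stance_B>"}

    tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
    model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

    # Register the control tokens as single vocabulary entries.
    tokenizer.add_special_tokens(
        {"additional_special_tokens": list(STANCE_TOKENS.values())})
    model.resize_token_embeddings(len(tokenizer))

    def build_input(article: str, stance: str) -> str:
        # Prepend the stance token; during fine-tuning the target would be the
        # outlet's own headline, so the model ties the token to that outlet's style.
        return f"{STANCE_TOKENS[stance]} {article}"

    def generate_headline(article: str, stance: str) -> str:
        inputs = tokenizer(build_input(article, stance),
                           return_tensors="pt", truncation=True, max_length=512)
        # Beam search with a short length budget, since headlines are short.
        output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=32)
        return tokenizer.decode(output_ids[0], skip_special_tokens=True)

    # Meaningful output requires fine-tuning on (stance, article, headline) triples:
    # print(generate_headline("行政院今日宣布新的能源政策……", "media_A"))

The same conditioning scheme extends to the other attributes mentioned in the abstract, such as length, tense, or sentiment, by adding further control tokens to the encoder input.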
