
Detailed Record

Author (Chinese): 余昊祥
Author (English): Yu, Hao-Hsiang
Title (Chinese): 強化深度學習對於自然語言處理的強韌度-以假新聞偵測為例
Title (English): Enhancing Deep Learning Robustness for Natural Language Processing: Fake News Detection as an Example
Advisor (Chinese): 胡毓忠
Advisor (English): Hu, Yuh-Jong
Committee Members: 張家銘、黃瀚萱
Committee Members (English): Chang, Jia-Ming; Huang, Hen-Hsen
Degree: Master's
Institution: National Chengchi University
Department: Department of Computer Science, In-service Master's Program
Publication Year: 2022
Graduation Academic Year: 110
Language: Chinese
Number of Pages: 42
Keywords (Chinese): 假新聞偵測、對抗式攻擊
Keywords (English): Fake news detection, Adversarial attack, Adversarial Defence, TextFooler
DOI: http://doi.org/10.6814/NCCU202201381
Abstract (Chinese): Fueled by the internet and social media, online news has become a major source of news. In recent years, the rise of research on adversarial attacks has put the classification accuracy of deep learning models for fake news detection under serious challenge.
This study uses text mining methods such as TF-IDF, TextRank, and KeyBERT, together with a model-output probing method (LogitOut), to locate the words in a text that are most vulnerable to TextFooler perturbation. The identified keywords are then replaced with synonyms to generate simulated adversarial examples, and adversarial training on these examples is used to strengthen the robustness of a BERT fake news detector against TextFooler attacks. The experiments show that (1) among the text mining methods, KeyBERT is the best at identifying the words TextFooler attacks, and the model-output method LogitOut clearly outperforms all text mining methods; and (2) the higher a keyword-search method's hit rate on the words TextFooler attacks, the better the simulated adversarial examples generated by synonym replacement, and training on these examples improves the robustness of the BERT fake news detector against TextFooler adversarial attacks.
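The vulnerable-word search described above can be illustrated with a short sketch. The listing below is not the thesis's actual LogitOut implementation; it only shows the general idea of ranking words by how much deleting each one lowers the classifier's logit for its predicted class, which is also the importance ranking TextFooler uses to pick its targets. The model name, helper functions, and example sentence are illustrative assumptions.

# A minimal sketch (assumed, not the thesis's exact LogitOut code) of ranking words
# by the logit drop caused by removing each one from the input.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def predicted_class_logit(text, label):
    # Logit of `label` for `text`, without gradient tracking.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        return model(**inputs).logits[0, label].item()

def rank_vulnerable_words(text):
    # Score every word by the logit drop caused by removing it, largest drop first.
    words = text.split()
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    base = predicted_class_logit(text, pred)
    scores = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((word, base - predicted_class_logit(ablated, pred)))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(rank_vulnerable_words("breaking news the president secretly signed the bill")[:5])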
Abstract (English): In recent years, research on adversarial attacks has emerged, making fake news detection with deep learning methods challenging again.
In this study, we try to increase the robustness of a BERT fake news detector against TextFooler by training it on simulated adversarial examples. To generate these examples, we use text mining methods such as TF-IDF, TextRank, and KeyBERT, as well as a method that probes the model output (LogitOut), combined with a synonym replacement strategy. The experimental results show that (1) KeyBERT identifies the words attacked by TextFooler better than the other text mining methods, and the model-output (LogitOut) method is much better than the text mining methods; and (2) the robustness of the BERT fake news detector against TextFooler improves after the simulated adversarial examples described above are added to the training data.
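The second half of the pipeline, generating simulated adversarial training examples by synonym replacement and adding them back to the training set, can be sketched as follows. WordNet is used here only as a stand-in synonym source (the TextFooler attack itself draws synonym candidates from counter-fitted word vectors [16]); the function names and the augment() helper are illustrative assumptions rather than the thesis's implementation.

# A minimal sketch (assumed) of generating simulated adversarial examples:
# replace the top-ranked vulnerable words with synonyms, keep the original label,
# and append the perturbed copy for adversarial fine-tuning of the BERT detector.
# Requires nltk.download("wordnet") once before use.
import random
from nltk.corpus import wordnet

def synonyms(word):
    # All WordNet lemmas for `word`, excluding the word itself.
    candidates = {
        lemma.name().replace("_", " ")
        for synset in wordnet.synsets(word)
        for lemma in synset.lemmas()
    }
    candidates.discard(word)
    return sorted(candidates)

def simulate_adversarial(text, ranked_words, k=3):
    # Substitute a random synonym for up to k of the most vulnerable words.
    words = text.split()
    targets = set(ranked_words[:k])
    perturbed = []
    for word in words:
        options = synonyms(word) if word in targets else []
        perturbed.append(random.choice(options) if options else word)
    return " ".join(perturbed)

def augment(dataset, rank_vulnerable_words):
    # Append one perturbed copy of every (text, label) pair to the training data.
    augmented = []
    for text, label in dataset:
        ranked = [word for word, _ in rank_vulnerable_words(text)]
        augmented.append((text, label))
        augmented.append((simulate_adversarial(text, ranked), label))
    return augmented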
Table of Contents:
Chapter 1  Introduction  1
  Section 1  Research Background  1
  Section 2  Research Motivation  9
  Section 3  Research Objectives  9
  Section 4  Research Questions  10
Chapter 2  Literature Review  12
  Section 1  Defense Strategies against TextFooler Attacks  12
  Section 2  FireBERT  12
Chapter 3  Research Method  14
  Section 1  Research Procedure  14
  Section 2  Data Collection and Construction of the BERT Fake News Detector  15
  Section 3  TextFooler Adversarial Example Generation and Testing  16
  Section 4  Simulated Adversarial Training Data Generation and BERT Fake News Detector Optimization  16
Chapter 4  Results and Analysis  23
  Section 1  Experimental Environment  23
  Section 2  Data Collection and Construction of the BERT Fake News Detector  25
  Section 3  TextFooler Adversarial Example Generation and Testing  28
  Section 4  Simulated Adversarial Training Data Generation  30
  Section 5  Cross Analysis  37
Chapter 5  Conclusion and Future Work  39
  Section 1  Conclusion  39
  Section 2  Future Work  40
References  41
References:
[1] Nic Newman, Richard Fletcher, and David A. L. Levy, et al. Digital News Report 2016. Digital Journalism. https://reutersinstitute.politics.ox.ac.uk/our-research/digital-news-report-2016, 2016.
[2] Edson C. Tandoc Jr. and Zheng Wei Lim, et al. Defining fake news. Digital Journalism. https://doi.org/10.1080/21670811.2017.1360143, 2018.
[3] Ashish Vaswani, Noam M. Shazeer, and Niki Parmar, et al. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
[4] Jacob Devlin, Ming-Wei Chang, and Kenton Lee, et al. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2019.
[5] Haoming Guo, Tianyi Huan, and Huixuan Huang, et al. Detecting COVID-19 conspiracy theories with transformers and TF-IDF. arXiv preprint arXiv:2205.00377, 2022.
[6] Di Jin, Zhijing Jin, and Joey Tianyi Zhou, et al. Is BERT really robust? Natural language attack on text classification and entailment. arXiv preprint arXiv:1907.11932, 2019.
[7] Shilin Qiu, Qihe Liu, and Shijie Zhou, et al. Adversarial attack and defense technologies in natural language processing: A survey. Neurocomputing, 2022.
[8] Ji Gao, Jack Lanchantin, and Mary Lou Soffa, et al. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW). IEEE, 2018.
[9] Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328, 2017.
[10] Zhihong Shao, Zitao Liu, and Jiyong Zhang, et al. AdvExpander: Generating natural language adversarial examples by expanding text. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.
[11] Daniel Matthew Cer, Yinfei Yang, and Sheng-yi Kong, et al. Universal sentence encoder. arXiv preprint arXiv:1803.11175, 2018.
[12] Gunnar Mein, Kevin Hartman, and Andrew Morris. FireBERT: Hardening BERT-based classifiers against adversarial attack. arXiv preprint arXiv:2008.04203, 2020.
[13] Lawrence Page, Sergey Brin, and Rajeev Motwani, et al. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[14] Rada Mihalcea and Paul Tarau. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004.
[15] Maarten Grootendorst. KeyBERT: Minimal keyword extraction with BERT. Available: https://maartengr.github.io/KeyBERT/index.html, 2020.
[16] Nikola Mrksic, Diarmuid Ó Séaghdha, and Blaise Thomson, et al. Counter-fitting word vectors to linguistic constraints. In NAACL, 2016.
 
 
 
 