
運用群眾外包生成之解釋於加強大眾對仇恨迷因理解與意識之研究

Enhancing People’s Understanding and Awareness of Hateful Memes Using Crowdsourced Explanations

Advisor: 許永真

Abstract


Internet memes, as units of culture, spread rapidly through social media. While most memes are created for humor, some carry hateful meanings and are used to attack others. Although AI-enabled automatic detection of hateful memes has proliferated, we argue that high-quality explanations help trigger immunization against such harmful information. This work proposes a new approach to generating explanations that bridge the cultural gap in understanding hateful memes, and presents two user studies. Inspired by prior research, we propose a three-stage crowdsourcing workflow that guides crowd workers to generate, annotate, and revise explanations of hateful memes. To ensure explanation quality, a self-assessment rubric is designed around four criteria: target, clarity, explicitness, and utility. Study 1 evaluates the proposed workflow in an online study with 66 participants, comparing it against a baseline workflow. The results show that the three-stage workflow guided crowd workers to produce higher-quality explanations than the baseline. Study 2 explores how different types of explanations affect user perception. Experimental results from 127 participants suggest that people without prior cultural knowledge gained significantly greater perceived understanding and awareness of hateful memes when presented with explanations generated by the multi-stage workflow than with baseline or machine-generated explanations.
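The core mechanism summarized above, a three-stage workflow (generate, annotate, revise) paired with a four-criterion self-assessment rubric (target, clarity, explicitness, utility), can be sketched as a simple data model. The Python sketch below is illustrative only and is not taken from the thesis; all class, field, and function names (RubricScores, Explanation, annotate, revise) and the 1-to-5 scoring scale are hypothetical assumptions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class RubricScores:
    """Self-assessment scores on the four criteria named in the abstract (assumed 1-5 scale)."""
    target: int        # does the explanation identify who or what the meme attacks?
    clarity: int       # is the explanation easy to read and unambiguous?
    explicitness: int  # does it spell out the hidden hateful meaning directly?
    utility: int       # would it help a reader who lacks the cultural background?


@dataclass
class Explanation:
    """One crowd-written explanation as it moves through the workflow stages."""
    meme_id: str
    text: str
    stage: str = "generate"  # "generate" -> "annotate" -> "revise"
    scores: List[RubricScores] = field(default_factory=list)


def annotate(expl: Explanation, scores: RubricScores) -> None:
    """Stage 2: rate the draft explanation against the rubric."""
    expl.scores.append(scores)
    expl.stage = "annotate"


def revise(expl: Explanation, revised_text: str) -> None:
    """Stage 3: rewrite the draft, guided by the low-scoring criteria."""
    expl.text = revised_text
    expl.stage = "revise"


if __name__ == "__main__":
    draft = Explanation(meme_id="meme-001",
                        text="This meme mocks a community by twisting a local idiom.")
    annotate(draft, RubricScores(target=4, clarity=2, explicitness=3, utility=2))
    revise(draft, "This meme twists a local idiom to mock a specific community; "
                  "readers unfamiliar with the idiom may miss who the joke targets.")
    print(draft.stage, draft.scores)
```

The sketch only shows where the self-assessment scores would attach and how an explanation passes through the three stages; the thesis's actual task interfaces and scoring procedure are not specified in this abstract.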
