
Explainable Artificial Intelligence: A Research Framework and Bibliometric Analysis

Advisors: 梁定澎, 彭志宏

Abstract


In recent years, as the field of artificial intelligence has advanced, the black-box models used in deep learning have made prediction results difficult to understand, creating bottlenecks for AI development at the technical, legal, economic, and social levels. Whether the key decision factors hidden in opaque black-box models can be made explainable has therefore become a crucial and urgent research direction, known as eXplainable Artificial Intelligence (XAI). Academic research on XAI is still at an early stage, however, and lacks a complete context and a comprehensive synthesis. The main purpose of this study is to collect and analyze the previously published literature on XAI-related research topics, summarize the current state of development, clarify outstanding problems, and propose a research framework for future researchers. This study gathers the existing academic XAI literature through the Web of Science database, applies bibliometric analysis with the aid of the VOSviewer software to analyze the literature quantitatively and visually, and compiles the academically important publications. It also provides a structured synthesis of XAI techniques and evaluation methods, offering a basic technical understanding to foster further research. Finally, it summarizes the open problems and development constraints of current XAI research to guide the direction of future work.
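The bibliometric step can be made concrete with a short sketch. The following is a minimal illustration, not code from the thesis, of the keyword co-occurrence counting that tools like VOSviewer perform internally when building a keyword map; it assumes a hypothetical tab-delimited Web of Science export named savedrecs.txt whose DE column holds author keywords separated by semicolons.

    import csv
    from collections import Counter
    from itertools import combinations

    cooccurrence = Counter()
    with open("savedrecs.txt", encoding="utf-8-sig", newline="") as f:
        for record in csv.DictReader(f, delimiter="\t"):
            # Normalize keywords so "XAI" and "xai" count as the same term.
            keywords = sorted({k.strip().lower()
                               for k in (record.get("DE") or "").split(";")
                               if k.strip()})
            # Each unordered keyword pair within one paper is one co-occurrence.
            for pair in combinations(keywords, 2):
                cooccurrence[pair] += 1

    # The strongest links correspond to the thickest edges in a keyword map.
    for (a, b), n in cooccurrence.most_common(10):
        print(f"{n:4d}  {a} -- {b}")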

Abstract (English)


Recently, Artificial Intelligence (AI) and deep learning have become popular in predictive modeling and decision making, but the process of producing results is not transparent and sometimes hard to understand. This has become a bottleneck for adopting artificial intelligence from technical, legal, economic, and social perspectives. Hence, making AI decisions explainable despite the opaque black-box model has become an important and imperative research direction, called eXplainable Artificial Intelligence (XAI). A number of papers related to XAI have been published in different areas, but explainability involves so many distinct issues that researchers interested in entering the area find it hard to gain a complete picture. The purpose of this research is to conduct a bibliometric analysis that provides a comprehensive overview of the explainable artificial intelligence literature. The published literature is identified, sorted, and clarified to build a research framework that can guide researchers. Based on our findings, future research issues and constraints of explainable artificial intelligence are identified. The findings of this research shed much light on the current status and future directions of XAI.
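As a companion illustration of what "explainable" means for a black-box model, here is a minimal, self-contained sketch of permutation feature importance, one generic model-agnostic explanation technique of the kind surveyed in the XAI literature; the black_box_predict function and the toy data are stand-ins, not a model or dataset from this study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: y depends strongly on feature 0 and weakly on feature 1.
    X = rng.normal(size=(500, 3))
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

    def black_box_predict(X):
        # Stand-in for an opaque model's prediction function.
        return 3.0 * X[:, 0] + 0.5 * X[:, 1]

    def mse(y_true, y_pred):
        return float(np.mean((y_true - y_pred) ** 2))

    baseline = mse(y, black_box_predict(X))
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-target link
        # Importance = how much the error grows when feature j is shuffled.
        gain = mse(y, black_box_predict(X_perm)) - baseline
        print(f"feature {j}: importance = {gain:.3f}")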
