
Legal Significance of Explainable AI and Its Practice

Abstract


In recent information science, the so-called "explainability" of AI carries two connotations. The first is interpretability, an explanation given after understanding, which includes subject-centered and model-centered explanations. The second is transparency, achieved through methods such as decomposition or "model-agnostic systems" (e.g., surrogate models). In legal discussions of AI, by contrast, statutes and judicial decisions use the term "explanation" when referring to the "right to explanation", but there is still considerable debate over what that right entails and whether it resembles "explainability" as understood in information science. This article argues that when a higher level of explanation is required (for example, for automated decision-making in the public sector), explanations produced by transparency-based methods may be too complex to be of much use to affected persons and may also infringe the model producer's trade secrets. The law should instead focus on the two interpretability-based approaches: subject-centered explanation, which provides the data subject with information about people who received decisions similar to their own, and model-centered explanation, which includes an overview of the training data, the type of model, the most important factors, and the model's performance. Only such explanations satisfy the "meaningful information" requirement of Article 15 of the GDPR. These explanations do not include the weight of each factor or the source code. Finally, with regard to judicial AI that may emerge in the future, this article takes research on legal analytics as an example to illustrate how the processing of legal data and the computation process relate to explainability, so that users such as judges and lawyers can properly exercise the "right to explanation".
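To make the transparency-based, "model-agnostic" methods mentioned above concrete, the following is a minimal sketch (Python with scikit-learn; the data and model choices are hypothetical stand-ins, not the article's own experiments) of a surrogate model: an interpretable decision tree fitted to the predictions of a black-box classifier, together with a measure of how faithfully it mimics the original.

```python
# Minimal sketch of a "model-agnostic" surrogate model (hypothetical data):
# an interpretable decision tree is fitted to the *predictions* of a
# black-box model, approximating its behavior from the outside.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for case features (e.g., factors in custody decisions).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# The "black box" whose inner workings are not directly interpretable.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully the surrogate reproduces the black box (fidelity).
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"factor_{i}" for i in range(8)]))
```

The printed rules are exactly the kind of externally produced explanation the article warns may be too complex, or insufficiently faithful, to count as "meaningful information" for affected persons.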

Parallel Abstract


This article attempts to clarify whether, or in which respects, "explainable AI", a research hotspot in the data science community, can meet the "explainability" or "right to explanation" required in the legal domain. First, by analyzing recent data science research on explainable AI, two connotations of "explainability" are identified. One is interpretability: an interpretation produced by researchers after understanding the model. The other is transparency, which is achieved by methods such as decomposition or "model-agnostic" systems (e.g., surrogate models). Next, the article turns to discussions of "explanation" in the legal domain. The word "explanation" is often used when regulations and judicial decisions require information about algorithms, although adjacent concepts such as information access, disclosure, and due process appear even more frequently. There is still considerable debate, however, over whether a "right to explanation" can be derived from regulations such as the GDPR and what its content would be. After comparing the idea of "explanation" in data science and in law, this article argues that when a higher level of explanation is required (for example, when reviewing public-sector decisions), exogenous approaches such as the surrogate models developed by data scientists do not satisfy the "meaningful information" defined by law and hence are not legally qualified explanations. The information provided by AI producers should at least include an overview of the training data, the type of model, the most important factors, and the effectiveness of the model. Such model-centered information may comply with the "meaningful information" requirement of Article 15 of the GDPR. On the other hand, the weight of each factor and the source code are not included in the information that must be legally disclosed. Finally, with regard to the judicial AI that may appear in the future, this article takes research on legal analytics as an example to illustrate the relationship between the processing of legal data and explainability, so that users such as judges and lawyers can properly exercise the "right to explanation".
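By way of illustration, the model-centered disclosure the article treats as legally meaningful (a training-data overview, the model type, the most important factors, and the model's effectiveness, but not the individual weights or the source code) could be assembled as in this sketch; the dataset, feature names, and metrics are hypothetical.

```python
# Sketch of a "model-centered explanation": the disclosure the article deems
# meaningful (training-data overview, model type, top factors, performance),
# while factor weights and source code stay undisclosed. Data are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=800, n_features=6, random_state=1)
feature_names = [f"factor_{i}" for i in range(6)]

model = GradientBoostingClassifier(random_state=1).fit(X, y)

explanation = {
    "training_data": {"n_cases": X.shape[0], "n_factors": X.shape[1],
                      "outcome_rate": float(y.mean())},
    "model_type": type(model).__name__,
    # Rank factors by importance but report only the ordering, not the weights.
    "most_important_factors": [feature_names[i] for i in
                               np.argsort(model.feature_importances_)[::-1][:3]],
    "effectiveness": {"cv_accuracy": float(cross_val_score(model, X, y, cv=5).mean())},
}
print(explanation)
```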

References


邵軒磊、黃詩淳(2020),〈新住民相關親權酌定裁判書的文字探勘:對「平等」問題的法實證研究嘗試〉,《臺大法學論叢》,49卷特刊,頁1267-1308。https://doi.org/10.6199/NTULJ.202011/SP_49.0001
張永健、何漢葳、李宗憲(2017),〈或重於泰山、或輕於鴻毛:地方法院車禍致死案件撫慰金之實證研究〉,《政大法學評論》,149期,頁139-219。https://doi.org/10.3966/102398202017060149003
黃詩淳、邵軒磊(2017),〈運用機器學習預測法院裁判:法資訊學之實踐〉,《月旦法學雜誌》,270期,頁86-96。https://doi.org/10.3966/102559312017110270006
黃詩淳、邵軒磊(2018),〈酌定子女親權之重要因素:以決策樹方法分析相關裁判〉,《臺大法學論叢》,47卷1期,頁299-344。https://doi.org/10.6199/NTULJ.201803_47(1).0005
黃詩淳、邵軒磊(2019),〈人工智慧與法律資料分析之方法與應用:以單獨親權酌定裁判的預測模型為例〉,《臺大法學論叢》,48卷4期,頁2023-2073。https://doi.org/10.6199/NTULJ.201912_48(4).0005
