
AI倫理的兩面性初探-人類研發AI的倫理與AI倫理

A Preliminary Study of AI Ethical Duality: AI Ethics and Ethical AIs

Abstract


Current international ethical guidelines for AI are directed mainly at AI developers and users, and pay comparatively little attention to AI systems themselves. Yet AI's capacity for autonomous learning means that AI ethics should cover both aspects, and this paper proposes a more inclusive philosophical ethical view to that end. This view accommodates AI's capacity for autonomous moral decision-making: on its current and foreseeable technical basis, that capacity is intellectual and minimal compared with human beings, yet AIs still possess distributive agency with respect to bearing responsibility for their decisions. Finally, for both aspects of AI ethical regulation, we propose bottom-line moral obligations as the starting point for all norms; and, on the question of implementing AI regulations, we respond by proposing directions for establishing case-based ethical evaluation mechanisms for AI development.

Parallel Abstract


AI (artificial intelligence) is characterized by autonomous learning and reactive capabilities. Arguably, these characteristics mean AIs can be held liable for failing to adhere to ethical, legal and social regulations when undertaking autonomous acts. In this paper, we consider the duality of AI ethics: that AI ethics should work both for humans and for AIs. We lay out a more inclusive ethical view which accommodates the idea of AIs characterized by distributive agency, which posits that AIs possess only a minimal degree of autonomy or intelligent autonomy in virtue of which they can be given a certain agency status. The agency status of AIs will differ from that of human beings, governments, and corporations; nonetheless, the inclusive view also suggests that AIs ought to be held accountable for their acts: shared responsibility. We also suggest a bottom-line view of moral obligations as the starting point for the duality of AI ethics. Finally, we suggest an implementation mechanism for AI ethics based on case evaluations.

References


張忠宏 (2015)。〈道德內在論的磁性〉,《國立政治大學哲學學報》,34: 1-68. Retrieved from http://thinkphil.nccu.edu.tw/files/archive/163_b70dc71f.pdf (Chang, C. H. [2015]. The magnetism of moral internalism. NCCU Philosophical Journal, 34: 1-68.)
1348_7af3b4ef.pdf (Hsu, H. [2018]. Principles, situations, and the normativity of morality. Journal of Social Sciences and Philosophy, 30, 3: 1-35.)
陳小平 (2019)。〈人工智能倫理體系:基礎架構與關鍵問題〉,《智能系統學報》,14, 4: 605-610. (Chen, X. P. [2019]. Ethical system of artificial intelligence: Infrastructure and key issues. CAAI Transactions on Intelligent Systems, 14, 4: 605-610.) https://doi.org/10.11992/tis.201906037
Arkoudas, K., & Bringsjord, S. (2014). Philosophical foundations. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 34-63). Cambridge, UK: Cambridge University Press.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature, 563, 7729: 59-64. https://doi.org/10.1038/s41586-018-0637-6

Cited by


甘偵蓉 (2023)。〈為何應該以人工智能強化倫理衝突的緊急決策?〉,《資訊社會研究》,45: 19-50. https://doi.org/10.29843/JCCIS.202307_(45).0002
