
Unintentional Discrimination in the Application of Artificial Intelligence to Public Policies: A Systematic Literature Review

Abstract


This study examines the ethical problems arising from the application of artificial intelligence (AI) to public policy, based on the principle of equality in citizenship drawn from Miller's plural view of justice. Adopting the PRISMA model to screen academic studies, a qualitative meta-analysis was employed to examine the institutional processes and outcomes of AI applications in the policy fields of advanced countries. The research found that AI has been applied in eight public policy fields: criminal justice, policing, health care, homeland security and border management, education, public finance, public employment, and national defense. In these fields, AI has made administrative work more efficient and improved the well-being of most people, while also creating unintentional discrimination against specific groups. An examination of the institutional process shows that governments have overlooked the long-standing social injustice hidden in the big data used for machine learning; in terms of institutional outcomes, this historical injustice continues to be reproduced through AI, leading to differential treatment of specific groups and the deprivation of their basic human rights. To analyze the patterns and nature of unintentional discrimination across policy areas, this study draws on the order of priority of human rights protection implied by international human rights conventions, and analyzes the negative effects of AI on specific groups along two dimensions: whether those discriminated against voluntarily submit to evaluation, and whether negative or positive rights are deprived. The results show that the application of AI in policing, criminal justice, and health care involves the deprivation of negative rights such as the right to life and the right to liberty, and therefore urgently needs to be addressed. The paper concludes by discussing why the correction of unintentional discrimination cannot rely on the self-awareness of civil society but instead requires the active intervention of government, and by suggesting specific actions that governments should take in the preparatory and implementation stages of AI applications in order to reduce the harm of unintentional discrimination to the human rights of specific groups.


Cited by


顏子棋、胡亞平、邱紹群 (2023)。警政單位AI科技執法深度與廣度研究。管理資訊計算,12(2),1-17。https://doi.org/10.6285/MIC.202309_12(2).0001
