With the rise of predictive algorithms driven by big data, controversies surrounding AI-enabled autonomous weapons systems have come to the fore as a new topic in international legal scholarship. To fill the gap in international law research on the normative order taking shape around these emerging AI weapons, this paper draws on international humanitarian law and the Convention on Certain Conventional Weapons to examine how such weapons systems are, and ought to be, governed by the existing international legal order. Grounded in a review of the literature, and using "degree of autonomy," "intelligent decision-making loop," and "characteristic-based enumeration" as typological criteria, the paper compares how the United States, China, Germany, the International Committee of the Red Cross, and Human Rights Watch define autonomous weapons systems, and then traces the negotiation history and key milestones of the Convention on Certain Conventional Weapons, the principal source of international law on arms control. The paper finds that the proposals over which the Convention's contracting parties remain deadlocked can no longer adequately address the potential risks posed by autonomous weapons systems. It therefore advances several normative recommendations for regulating military algorithms, guided by the principles of meaningful human control and accountability.
With the rise of big data-driven predictive algorithms, controversies over AI autonomous weapons systems have emerged as a new focal point in the study of international law. This paper examines how these new systems are, and ought to be, governed by the international legal order on the basis of international humanitarian law and the Convention on Certain Conventional Weapons. Focusing on the texts of international law, this paper compares how autonomous weapons systems are defined by the U.S. Department of Defense, China, Germany, the International Committee of the Red Cross, and Human Rights Watch, according to the degree of autonomy, the intelligent decision-making loop, and characteristic-based enumeration. This paper then unpacks the negotiation process and milestones of the Convention on Certain Conventional Weapons. It demonstrates that several proposals put forth by the contracting parties to the Convention have reached an impasse in negotiations and can no longer adequately address the potential risks posed by autonomous weapons systems. Consequently, this paper proposes several regulatory recommendations for controlling military algorithms through the implementation of meaningful human control and accountability mechanisms.