Advances in technology have made it increasingly difficult for humans to predict the behaviors and outputs of artificial intelligence (AI), and determining who bears responsibility when AI makes a mistake has become a major challenge. Because AI developers and users are less and less able to anticipate how AI will respond, disputes involving AI products frequently raise intractable legal questions. In recent years, the widespread use of AI in everyday life has made life more convenient, but it has also produced many problems that legal scholars and experts did not foresee. This study examines practical legal cases involving AI, elaborating on the questions of rights and obligations that AI currently raises in the legal context and discussing the responsibilities and obligations AI should assume in these cases. Furthermore, the possibility of granting legal personality to AI is analyzed through doctrinal discussion and practical case analysis, demonstrating that conferring legal personality on AI could be a viable solution to the problems these cases present. Finally, this study outlines a feasible framework for AI legal personality and offers recommendations for future research.