A Convex Relaxation Analysis of Lower Bounds on the Accuracy of Maxpool-Based Convolutional Neural Networks under Adversarial Attacks

Provable verification on maxpool-based CNN via convex outer bound

Advisor: 吳沛遠

Abstract


In recent years, deep neural networks (DNNs) have achieved remarkable performance in image classification. However, neural networks are fragile against images containing malicious perturbations, known as adversarial examples. This has driven research on training more robust networks (certified training) to defend against adversarial attacks, and has spawned a related sub-problem -- verification, which predicts a lower bound on a network's classification accuracy under adversarial attack. Early work on such verified lower bounds focused on networks with simpler architectures, such as fully-connected networks, whereas the architectures popular in image classification today, such as AlexNet, LeNet, and VGG, are mainly convolutional neural networks (CNNs) containing maxpool layers. Recent methods such as DeepZ, DeepPoly, and RefinePoly can verify such architectures, but the verified bounds they produce are often far from the network's true accuracy under attack, or they require a large amount of verification time. In this thesis we propose a novel approach for CNNs with maxpool layers: we decompose the maxpool layer into a series of ReLU functions, combine convex outer-bound (convex relaxation) techniques with duality theory, and analyze the lower bound on the adversarial accuracy of maxpool-based CNNs via a dual network. Experimental results on image classification show that, compared with prior methods (DeepZ, DeepPoly, RefinePoly), our approach obtains a tighter verified bound while spending relatively less verification time.
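One way to make the decomposition concrete (a sketch; the exact recursion used in the thesis may differ) is to express a two-input max with a single ReLU and reduce an n-input maxpool pairwise, after which every ReLU with pre-activation bounds $l < 0 < u$ admits the standard triangle relaxation used by convex-relaxation verifiers:

$$\max(x_1, x_2) = x_2 + \operatorname{ReLU}(x_1 - x_2), \qquad \max(x_1, \dots, x_n) = \max\bigl(\max(x_1, \dots, x_{n-1}),\, x_n\bigr)$$

$$\hat{z} \ge 0, \qquad \hat{z} \ge z, \qquad \hat{z} \le \frac{u\,(z - l)}{u - l} \qquad \text{for } z \in [l, u].$$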

Parallel Abstract


In the past few years, deep neural networks have reached unprecedented performance in image classification, generation, and segmentation. However, these networks are vulnerable to malicious modifications of the pixels of input images, known as adversarial examples. Accordingly, previous research proposed building neural networks that are provably robust to adversarial examples, an approach known as certified training. This idea has also been extended to a related sub-problem -- verification, which certifies the robustness properties of neural networks. Pioneering work in this field focused on verifying networks with simple architectures (e.g., fully connected); however, the majority of network architectures for image classification (e.g., AlexNet, LeNet, VGG) are maxpool-based CNNs. In this work, we improve the verified bound for maxpool-based CNNs under norm-bounded adversarial perturbations. By decomposing the maxpool function into a series of ReLU functions, we extend the convex relaxation trick to maxpool functions, so that the verified bound can be computed efficiently through a dual network. Experimental results demonstrate that our method yields state-of-the-art verification precision for maxpool-based CNNs at significantly lower computational cost than renowned verification methods such as DeepZ, DeepPoly, and RefinePoly.
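A minimal numerical sketch of these two ingredients, with hypothetical interval bounds and names (an illustration of the technique, not the thesis implementation):

# A minimal sketch, assuming hypothetical pre-activation bounds: it shows
# (1) the exact rewrite max(a, b) = b + ReLU(a - b) used to decompose a
# maxpool into ReLUs, and (2) the standard "triangle" convex relaxation of
# a ReLU, which yields a linear upper bound on the pooled output.
import numpy as np

def max2_as_relu(a, b):
    """Exact identity: max(a, b) = b + ReLU(a - b)."""
    return b + np.maximum(a - b, 0.0)

def relu_triangle_upper(l, u):
    """Chord of ReLU over [l, u] with l < 0 < u: ReLU(z) <= s*z + t."""
    s = u / (u - l)   # slope of the chord from (l, 0) to (u, u)
    t = -s * l        # the chord passes through (l, 0)
    return s, t

# Sanity-check the decomposition on random inputs.
rng = np.random.default_rng(0)
a, b = rng.normal(size=5), rng.normal(size=5)
assert np.allclose(max2_as_relu(a, b), np.maximum(a, b))

# Hypothetical interval bounds on two pooled pre-activations.
la, ua = -1.0, 0.5   # a in [la, ua]
lb, ub = -0.2, 0.9   # b in [lb, ub]

# The ReLU inside the decomposition sees z = a - b in [dl, du].
dl, du = la - ub, ua - lb
s, t = relu_triangle_upper(dl, du)

# Linear upper bound on the pooled value: max(a, b) <= b + s*(a - b) + t,
# which a dual/linear-bound verifier can propagate through later layers.
print(f"max(a, b) <= {s:.3f}*a + {1 - s:.3f}*b + {t:.3f}")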

