
Transfer-based Black-box Adversarial Attack under Limited Query with Genetic Algorithms

Advisor: Tian-Li Yu (于天立)

Abstract


Deep learning has been widely applied in many real-world domains in recent years, especially image classification, yet perturbations that are hardly perceptible to humans can mislead even high-accuracy classifiers. Adversarial attacks are used to find such adversarial examples and thereby improve the robustness of classifiers against perturbations, the goal being to prevent perturbations that degrade model accuracy from causing misclassification. Black-box adversarial attacks, however, have lacked a realistic application scenario. This thesis introduces a real-world scenario for black-box adversarial attacks, estimates an upper bound on the degree of perturbation from the distortion that environmental changes can introduce in this scenario, and demonstrates that such distortion impairs the accuracy of the classifier. The thesis also proposes a new black-box adversarial attack for this scenario: a transfer-based black-box adversarial attack with genetic algorithms. The attack combines the strengths of genetic algorithms with the transferability of adversarial examples, using a surrogate model to reduce the large number of queries the attack would otherwise require. The research further shows that the distribution of perturbed examples in the latent space helps the attack algorithm find adversarial examples more quickly.
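To make the idea concrete, the sketch below illustrates the general scheme the abstract describes: evolve a population of bounded perturbations with a genetic algorithm, rank candidates on a freely queryable surrogate model, and spend scarce black-box queries only on the most promising candidate. This is a minimal, hypothetical illustration, not the thesis's actual algorithm; the models (surrogate_logits, target_logits) and all parameters (eps, pop, gens, budget) are toy stand-ins.

# A minimal, hypothetical sketch of a GA-based black-box attack with a
# surrogate model. Toy stand-in classifiers only; illustrative, not the
# thesis's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)

def surrogate_logits(x):
    # Stand-in white-box surrogate: class 0 scores the left half of the
    # 8x8 "image", class 1 the right half.
    return np.array([x[:, :4].mean(), x[:, 4:].mean()])

def target_logits(x):
    # Stand-in black-box target: a slightly shifted copy of the surrogate,
    # so adversarial examples found on the surrogate tend to transfer.
    return surrogate_logits(x) + np.array([0.01, -0.01])

def ga_attack(x, label, eps=0.08, pop=20, gens=80, budget=100):
    # Evolve perturbations inside the eps-ball; rank them on the free
    # surrogate and spend a black-box query only on the best candidate.
    d = x.size
    population = rng.uniform(-eps, eps, size=(pop, d))
    queries = 0
    for _ in range(gens):
        # Surrogate fitness: margin of the best wrong class over the true one.
        fitness = []
        for p in population:
            logits = surrogate_logits(np.clip(x + p.reshape(x.shape), 0, 1))
            fitness.append(np.delete(logits, label).max() - logits[label])
        order = np.argsort(fitness)[::-1]          # best candidates first
        adv = np.clip(x + population[order[0]].reshape(x.shape), 0, 1)
        queries += 1                               # one real query per generation
        if np.argmax(target_logits(adv)) != label:
            return adv, queries                    # misclassified: attack succeeded
        if queries >= budget:
            break
        # Selection: the top half survive; offspring come from uniform
        # crossover of two random parents plus Gaussian mutation.
        parents = population[order[: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(d) < 0.5
            child = np.where(mask, a, b) + rng.normal(0, eps / 5, d)
            children.append(np.clip(child, -eps, eps))
        population = np.vstack([parents, np.array(children)])
    return None, queries

x = rng.random((8, 8))                             # toy "image" in [0, 1]
label = int(np.argmax(target_logits(x)))           # the target's clean prediction
adv, used = ga_attack(x, label)
print("success:", adv is not None, "| queries used:", used)

In a real setting, the toy logit functions would be replaced by actual classifiers, and eps would be tuned to the distortion bound the thesis derives from its real-world scenario; the key design point is that the surrogate absorbs most of the fitness evaluations so the black-box query count stays within the limited budget.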
