
Secure Aggregation Is Not All You Need: Mitigating Privacy Attacks with Noise Tolerance in Federated Learning

Advisor: 歐陽彥正 (Yen-Jen Oyang)

Abstract


Federated learning is a collaborative method that aims to preserve data privacy while creating AI models. Current approaches to federated learning tend to rely heavily on secure aggregation protocols to preserve data privacy. However, to some degree, such protocols must assume that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate vulnerabilities in secure aggregation that arise when the server is fully malicious and attempts to gain access to private, potentially sensitive data. Furthermore, we provide a method to further defend against such a malicious server, and demonstrate its effectiveness against known data-reconstruction attacks in the federated learning setting.
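The abstract does not detail the defense beyond "noise tolerance," so as an illustration only, the minimal Python sketch below shows one common noise-based defense pattern: each client clips and perturbs its own update before any aggregation, so even a server that circumvents secure aggregation observes only noised individual updates. All names and parameters here (clip_and_noise, clip_norm, noise_std) are hypothetical assumptions for illustration, not the thesis's actual protocol.

import numpy as np

def clip_and_noise(update: np.ndarray, clip_norm: float = 1.0,
                   noise_std: float = 0.1) -> np.ndarray:
    """Clip the update's L2 norm, then add Gaussian noise (DP-SGD-style).

    Hypothetical sketch; the thesis's actual mechanism may differ.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + np.random.normal(0.0, noise_std, size=update.shape)

def aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Server-side federated averaging of the (noised) client updates."""
    return np.mean(updates, axis=0)

# Each client perturbs its own update locally. A malicious server that
# bypasses secure aggregation sees only noised individual updates, while
# the mean over many clients stays close to the true average, so model
# utility degrades gracefully as noise_std grows.
client_updates = [np.random.randn(10) for _ in range(100)]
global_update = aggregate([clip_and_noise(u) for u in client_updates])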

