Federated learning is a collaborative method that aims to preserve data privacy while training AI models. Current approaches to federated learning tend to rely heavily on secure aggregation protocols to preserve data privacy. However, such protocols implicitly assume, to some degree, that the entity orchestrating the federated learning process (i.e., the server) is not fully malicious or dishonest. We investigate whether secure aggregation is vulnerable when the server is fully malicious and attempts to gain access to private, potentially sensitive data. Furthermore, we propose a method that further defends against such a malicious server, and demonstrate its effectiveness against known data-reconstruction attacks in the federated learning setting.
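To make the secure-aggregation assumption concrete, the following is a minimal sketch of the pairwise-masking idea behind many secure aggregation protocols (in the style of Bonawitz et al.), not the specific protocol or defense studied here; all function names and the toy data are illustrative. Each pair of clients shares a random mask that one adds and the other subtracts, so the server can recover the sum of updates without seeing any individual update in the clear:

```python
import random

def pairwise_masks(num_clients, dim, seed=0):
    """Build per-client mask vectors that cancel exactly when summed."""
    rng = random.Random(seed)  # stand-in for a shared pairwise secret
    masks = [[0.0] * dim for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for k in range(dim):
                masks[i][k] += m[k]  # client i adds the shared mask
                masks[j][k] -= m[k]  # client j subtracts it, so it cancels in the sum
    return masks

def secure_sum(updates):
    """Server-side aggregation: only masked updates are visible to the server."""
    masks = pairwise_masks(len(updates), len(updates[0]))
    masked = [[u + m for u, m in zip(upd, msk)]
              for upd, msk in zip(updates, masks)]
    # Summing the masked updates cancels every pairwise mask,
    # leaving the true aggregate.
    return [sum(col) for col in zip(*masked)]

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(secure_sum(updates))  # aggregate is (approximately) [9.0, 12.0]
```

The point relevant to this work: the server learns only the aggregate, but the guarantee rests on the server honestly following the protocol, which is exactly the assumption a fully malicious server can violate.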