
Secure Implementation and Optimization of Transformer Inference Models without a Trusted Third Party

SITransformer: Secure Inference Transformer without Trusted Third Party

Advisor: 吳沛遠

Abstract


Many cloud services are built on machine learning techniques nowadays. However, most cloud services were not designed with security in mind, which can result in data leaks or the loss of model assets. To address this issue, we propose a framework called SITransformer that combines convenience with security in the two-party computation (2PC) scenario. SITransformer is an extension of the SiRNN model and provides a better security assumption than the state-of-the-art CrypTen. SITransformer uses oblivious transfer as a secure communication channel and achieves inference efficiency comparable to CrypTen's by leveraging a hybrid approach combining optimistic concurrency control (OCC) and Seed. In SITransformer, both the server and the client hold data in the form of secret shares, thereby protecting the data and the model assets simultaneously. Our experiments demonstrate the correctness of the framework in a scenario without a Trusted Third Party (TTP), as well as the efficiency improvements brought by the OCC and Seed methods.
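The abstract's claim that "both the server and the client hold data in the form of secret sharing" rests on additive secret sharing, where a value is split into random-looking shares that only reveal the value when combined. The following is a minimal sketch of this primitive, not code from the thesis; the ring size `MOD` and the two-party split are illustrative assumptions.

```python
import secrets

# Illustrative ring Z_{2^64}; fixed-point MPC frameworks commonly share
# values over a ring of this form, but the thesis's parameters may differ.
MOD = 2 ** 64

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares; each share alone is uniformly
    random and reveals nothing about x."""
    r = secrets.randbelow(MOD)       # random mask held by one party
    return r, (x - r) % MOD          # e.g. client share, server share

def reconstruct(s0: int, s1: int) -> int:
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % MOD

# Linear operations need no communication: each party adds its own
# shares locally, yielding a valid sharing of the sum.
x0, x1 = share(123)
y0, y1 = share(456)
z0, z1 = (x0 + y0) % MOD, (x1 + y1) % MOD
assert reconstruct(z0, z1) == 579
```

Nonlinear steps such as the comparisons inside Transformer activations cannot be computed locally on shares like this; they are where interactive protocols built on oblivious transfer, as in the SiRNN building blocks the framework extends, come into play.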

