Implementation of Quantum Algorithms for Solving General Systems of Linear Equations and Their Applications to Machine Learning

Advisor: 管希聖 (Hsi-Sheng Goan)

Abstract


Since Peter Shor showed in 1994 that a quantum algorithm can solve the integer factorization problem efficiently, threatening the security of modern cryptography, quantum algorithms have been studied intensively. For the problem of solving a system of linear equations, a classical computer needs polynomial time. In 2009, a quantum algorithm (the HHL algorithm) was proposed that solves the quantum version of this problem in logarithmic time, and an improved version (the WZP algorithm) appeared in 2018. These inventions can be applied to machine-learning algorithms such as linear regression and support vector machines. This thesis investigates how to efficiently execute quantum linear-system algorithms, including both HHL and WZP, on an actual quantum computer, together with the algorithm that converts classical information into quantum information, and runs them on the quantum-computing simulator provided by IBM. The simulation results show that solving a system of two linear equations in two unknowns with three accuracy qubits already achieves a fidelity above 99%. The key point is that my circuit design greatly improves the accuracy without affecting the complexity. In addition, I also executed the algorithm on IBM's physical quantum computer.
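For context, the problem these algorithms solve can be stated compactly: given an N×N Hermitian matrix A with sparsity s and condition number κ, and a state |b⟩ encoding the right-hand side, prepare a state proportional to A⁻¹|b⟩ to precision ε. The sketch below summarizes the mechanism and the runtime bounds from the published HHL (2009) and WZP (2018) papers; it is an editorial summary, not text from the thesis.

```latex
% Quantum linear-system problem: map A x = b to state preparation.
\[
  A\vec{x}=\vec{b}
  \quad\longrightarrow\quad
  |x\rangle=\frac{A^{-1}|b\rangle}{\bigl\|A^{-1}|b\rangle\bigr\|}.
\]
% Expanding |b> in the eigenbasis {|u_j>} of A (eigenvalues \lambda_j),
% HHL uses quantum phase estimation to invert each eigenvalue:
\[
  |b\rangle=\sum_j \beta_j|u_j\rangle
  \quad\longrightarrow\quad
  |x\rangle \propto \sum_j \frac{\beta_j}{\lambda_j}\,|u_j\rangle.
\]
% Published runtime bounds:
%   HHL (sparse A):                  O(\log(N)\, s^2 \kappa^2 / \epsilon)
%   WZP (dense A):                   O(\sqrt{N}\, \kappa^2\, \mathrm{polylog}(N) / \epsilon)
%   Conjugate gradient (classical):  O(N s \kappa \log(1/\epsilon))
```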

Parallel Abstract


Quantum algorithms have attracted much attention since the quantum factoring algorithm proposed by Shor in 1994, which promised to factor large semiprime integers efficiently, threatening current cryptographic and encryption systems. Solving a system of linear equations takes polynomial time on a classical computer. However, a quantum algorithm known as the HHL algorithm, published in 2009, showed that the quantum version of the problem can be solved in only logarithmic time for sparse matrices. Furthermore, an improved version, the so-called WZP algorithm published in 2018, generalizes the problem to dense matrices. These breakthroughs can be applied to machine-learning algorithms such as linear regression and support vector machines. In this thesis, we discuss how to efficiently execute quantum linear-system algorithms, including the HHL algorithm and the WZP algorithm, on quantum simulators and a practical quantum computer provided by IBM. We also show how to transform classical information into a quantum state that can be manipulated in a quantum computer. The simulation result for solving a two-variable system of linear equations shows that, using three accuracy qubits, the state fidelity can exceed 99%. The key point is that we can greatly increase the accuracy at the same complexity. In addition, we implement the algorithm on a real IBM Q quantum processor.
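To make the state-preparation and fidelity steps concrete, here is a minimal Python sketch using Qiskit, the SDK behind IBM's simulators and devices. The 2×2 matrix A and vector b are illustrative placeholders, not the thesis's actual test case; the sketch amplitude-encodes b as |b⟩ and evaluates the fidelity metric quoted above against the exact normalized solution A⁻¹b.

```python
# Minimal sketch, assuming Qiskit is installed (pip install qiskit).
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, state_fidelity

# Illustrative placeholders: a Hermitian 2x2 system A x = b.
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

# Amplitude encoding: load the normalized classical vector b into |b>.
qc = QuantumCircuit(1)
qc.initialize(b / np.linalg.norm(b), 0)
state_b = Statevector(qc)

# Exact solution x = A^{-1} b, normalized the way a quantum solver returns it.
x = np.linalg.solve(A, b)
state_x = Statevector(x / np.linalg.norm(x))

# Fidelity |<x|b>|^2 -- the accuracy metric used above; in a full HHL run,
# state_b would be replaced by the state of the solution register.
print("fidelity:", state_fidelity(state_x, state_b))
```

In a complete HHL or WZP circuit, the comparison state would come from the solution register after phase estimation, controlled eigenvalue inversion, and uncomputation, rather than from the input state as in this toy example.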
