The expressibility of both single-qubit and multi-qubit quantum neural networks has been shown to be universal, capable of approximating any function. However, approximating an arbitrary function with a quantum neural network usually requires a relatively deep circuit, which can be impractical for near-term quantum devices. Building on previous work, this thesis therefore designs a hybrid quantum neural network (QNN) circuit that combines a data re-uploading circuit with a measurement feed-forward circuit, aiming to reduce circuit depth and width through feed-forward techniques. The hybrid circuit has a modular structure, allowing the number of encoding gates, qubits, data re-uploading steps, and feed-forward layers to be adjusted flexibly. We first demonstrate the capability of the feed-forward circuit in classification tasks and its feasibility in noisy environments. We then apply the hybrid QNN circuit to three different classification problems, exploring how various circuit structures and classification methods influence the results. Finally, we argue that increasing the number of feed-forward layers in a QNN is closely analogous to adding hidden layers in a classical neural network.
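As a rough illustration of the architecture described above, the sketch below shows how a data re-uploading block followed by a single measurement feed-forward step might be expressed in PennyLane. The gate choices (RY encoding, Rot trainable rotations, a measurement-conditioned RX), the qubit count, and the parameter shapes are illustrative assumptions, not the exact circuit studied in the thesis.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
n_reuploads = 3  # number of times the data is re-encoded (assumed value)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def hybrid_qnn(x, weights):
    """Data re-uploading blocks followed by one measurement feed-forward layer."""
    # Data re-uploading: interleave encoding gates with trainable rotations.
    for layer in range(n_reuploads):
        for w in range(n_qubits):
            qml.RY(x[w % len(x)], wires=w)        # encoding gate
            qml.Rot(*weights[layer, w], wires=w)  # trainable rotation
        qml.CNOT(wires=[0, 1])                    # entangling gate
    # Measurement feed-forward: a mid-circuit measurement conditions a later gate.
    m = qml.measure(0)
    qml.cond(m, qml.RX)(weights[-1, 0, 0], wires=1)
    return qml.expval(qml.PauliZ(1))

# Example call with random inputs and weights of shape (re-uploads, qubits, 3).
x = np.array([0.3, -0.7])
weights = np.random.uniform(0, np.pi, size=(n_reuploads, n_qubits, 3))
print(hybrid_qnn(x, weights))
```

In this sketch the re-uploading depth, qubit count, and number of feed-forward steps are ordinary Python parameters, which mirrors the modular structure described above: each can be varied independently when exploring different circuit configurations.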