In data mining, the QR decomposition is one of the important matrix decomposition methods. Among QR decompositions, the tall-and-skinny matrix is a particularly difficult special case. Although several algorithms for tall-and-skinny matrices already exist, such as TSQR and Cholesky QR, none of them are well suited to acceleration on graphics processing units. Because GPU memory is limited, and data transfer between the CPU and the GPU is very time-consuming, a good GPU algorithm must not only run in parallel but also fit within the memory constraints and minimize data transfer. Therefore, this thesis proposes a method that combines TSQR and Cholesky QR, making it better suited to computation on GPUs. In addition, this thesis observes that most of the submatrices arising in TSQR have a special dual-triangular structure, so replacing Householder reflections with Givens rotations can effectively reduce the amount of computation. Finally, we compare the performance of our method against TSQR on GPUs, demonstrating that our method performs better not only in theory but also in practice.
The QR decomposition is one of the fundamental matrix decompositions in data mining. A particularly challenging case is the QR decomposition of a tall-and-skinny matrix, a matrix with many more rows than columns. Tall-skinny QR has many applications, including Krylov subspace methods and other subspace projection methods for linear systems; it can also accelerate principal component analysis (PCA). Although algorithms such as TSQR and Cholesky QR have been proposed for computing the QR decomposition of tall-and-skinny matrices, none of them are well suited to the general-purpose graphics processing unit (GPGPU), which is increasingly used nowadays. In view of the limited memory of the GPGPU and the costly data transfer between the CPU and the GPGPU, we propose a novel R-initiated TSQR that makes computing tall-and-skinny QR on the GPGPU viable. Specifically, our method is unique in that it uses Givens QR to exploit the dual-triangular (DT) structure of the submatrices in TSQR, significantly reducing the required computation. With the R-initiated approach, our method not only meets the memory limitation of the GPGPU but also avoids a large amount of data transfer. The performance comparison shows that our method outperforms the original TSQR both in theoretical cost and in implementation.
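To make the TSQR structure referenced above concrete, the following is a minimal sketch of a two-block TSQR using NumPy. It is illustrative only: the per-block factorizations use np.linalg.qr, and the stacked pair of upper-triangular factors (the dual-triangular matrix that the thesis targets with Givens rotations) is here factored with a dense QR as a stand-in rather than with the thesis's Givens-based or R-initiated scheme. The function name and block split are assumptions for illustration.

```python
import numpy as np

def tsqr_two_block(A):
    """Illustrative two-block TSQR (not the thesis's R-initiated method)."""
    m, n = A.shape
    half = m // 2
    A1, A2 = A[:half], A[half:]

    # Local QR of each tall block (these could run in parallel on a GPU).
    Q1, R1 = np.linalg.qr(A1)
    Q2, R2 = np.linalg.qr(A2)

    # Stacking the two upper-triangular factors gives a 2n-by-n
    # dual-triangular matrix; the thesis exploits its zero pattern with
    # Givens rotations, whereas this sketch uses a dense QR.
    Q3, R = np.linalg.qr(np.vstack([R1, R2]))

    # Recover the full Q by applying the small Q3 blocks to Q1 and Q2.
    Q = np.vstack([Q1 @ Q3[:n], Q2 @ Q3[n:]])
    return Q, R

if __name__ == "__main__":
    A = np.random.randn(10000, 32)            # tall-and-skinny input
    Q, R = tsqr_two_block(A)
    print(np.allclose(Q @ R, A))              # factorization check
    print(np.allclose(Q.T @ Q, np.eye(32)))   # orthogonality check
```

In a full TSQR the blocking is applied recursively over many blocks, and the cost of the reduction step is dominated by repeatedly factoring such dual-triangular matrices, which is why exploiting their structure with Givens rotations pays off.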