Real-time camera pose tracking is essential for robotics, geometry-aware augmented reality, and 3D scene reconstruction. The advent of low-cost, Kinect-like depth sensors enables users to capture depth images of surrounding objects simply by holding and moving a camera, creating a new opportunity to track camera pose accurately from noisy depth images. To achieve robust pose estimation, state-of-the-art solutions adopt frame-to-model tracking schemes, which inevitably incur high memory bandwidth and computational intensity requirements, limiting their application on mobile devices. In this thesis, we present a robust camera tracking method that combines a frame-to-frame tracking algorithm with a frame-to-model correction technique. The tendency of pose estimation errors to accumulate rapidly is suppressed at the cost of a small amount of additional computational complexity. To make the proposed algorithm more efficient and practical, we further develop a real-time camera tracking system consisting of a hardware pose-tracking accelerator and a CPU-based pose correction unit that computes in parallel with it. The hardware architecture of the pose tracker is specifically designed to achieve a processing speed of 30.77 fps at an operating frequency of 25 MHz. Experiments on real-world datasets demonstrate that our tracking system performs comparably to high-power-consumption GPU-based solutions and considerably outperforms other methods of low computational complexity.
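The hybrid scheme described above can be sketched as a simple control loop: relative poses are chained frame-to-frame (where drift accumulates), and a frame-to-model step periodically re-anchors the estimate against the accumulated scene model. The sketch below is a minimal illustration under assumed interfaces; `estimate_relative_pose` and `correct_against_model` are hypothetical placeholders standing in for the ICP-style alignment steps, not the thesis's actual implementation.

```python
import numpy as np

def estimate_relative_pose(prev_frame, curr_frame):
    # Placeholder for frame-to-frame alignment (e.g., point-to-plane ICP
    # between consecutive depth frames). A real tracker would iteratively
    # minimize an alignment error; identity keeps the sketch self-contained.
    return np.eye(4)

def correct_against_model(pose, model, curr_frame):
    # Placeholder for frame-to-model correction: re-align the current frame
    # against a rendered view of the accumulated model to cancel drift.
    return pose

def track(frames, correction_interval=10):
    """Frame-to-frame tracking with periodic frame-to-model correction."""
    poses = [np.eye(4)]            # camera-to-world pose of frame 0
    model = []                     # accumulated scene model (e.g., fused points)
    for i in range(1, len(frames)):
        rel = estimate_relative_pose(frames[i - 1], frames[i])
        pose = poses[-1] @ rel     # chain relative motions; drift accumulates here
        if i % correction_interval == 0:
            # Occasional frame-to-model step keeps the error bounded while
            # keeping the per-frame cost close to pure frame-to-frame tracking.
            pose = correct_against_model(pose, model, frames[i])
        poses.append(pose)
    return poses
```

Running the correction only every `correction_interval` frames is what keeps the overhead small: most frames pay only the cheap frame-to-frame cost, which is also what makes the tracking loop amenable to a dedicated hardware accelerator with the correction unit running in parallel on the CPU.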