To improve the accuracy of robotic 3D visual recognition, this thesis proposes a 3D recognition system based on combining local features with global verification. In the implementation, we modify existing methods and reorganize them into a better-performing system pipeline. In addition, we release suitable parameter settings for the Kinect V1/V2 cameras, together with several 3D point cloud test models. The proposed system architecture consists of three parts. The pre-processing stage performs background filtering, noise removal, and point cloud simplification to ensure the quality of the point cloud data. The recognition stage uses the SHOT descriptor and Hough Voting to recognize the object and to output a pose estimate, which is then refined with ICP. Finally, a modified global verification method, Hypothesis Verification, is incorporated into the pipeline to check whether each output is a true detection and to filter out erroneous results.
To improve the accuracy of robotic 3D visual recognition, this thesis proposes a 3D recognition system that combines local features with a global verification technique. To this end, we modify state-of-the-art methods and organize them into a robust hybrid pipeline. As a further contribution, we release well-tuned parameters for the Kinect V1/V2 sensors together with a point cloud test dataset. In the proposed framework, the pre-processing stage performs range filtering, noise reduction, and point cloud refinement, so that the captured point cloud is more reliable and better describes the object surface. The second part focuses on recognition and pose estimation; it relies on two robust methods, the SHOT descriptor for local feature generation and Hough Voting for object alignment. Finally, after ICP refines the pose matrix, a modified global Hypothesis Verification stage removes false positives while keeping the correct instances. In addition, we design a keypoint selection mechanism that feeds the hypothesis verification results back into the local feature stage.
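To make the overall flow concrete, the following is a minimal sketch of how the stages described above (SHOT matching, Hough Voting, ICP refinement, and global Hypothesis Verification) can be chained together with the Point Cloud Library. It is not the released implementation of this thesis: the file names and all numeric parameters are illustrative placeholders rather than the tuned settings, and the pre-processing stage (background filtering and noise removal) is assumed to have been applied to the scene already.

// Minimal sketch (not the thesis' released implementation): SHOT matching,
// Hough Voting, ICP refinement, and global Hypothesis Verification with PCL >= 1.8.
// File names and every numeric parameter below are illustrative placeholders.
#include <cmath>
#include <vector>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/recognition/cg/hough_3d.h>
#include <pcl/recognition/hv/hv_go.h>
#include <pcl/registration/icp.h>

using PointT  = pcl::PointXYZ;
using NormalT = pcl::Normal;
using DescT   = pcl::SHOT352;

int main ()
{
  // The scene is assumed to be pre-processed already (background and noise removed).
  pcl::PointCloud<PointT>::Ptr model (new pcl::PointCloud<PointT>), scene (new pcl::PointCloud<PointT>);
  pcl::io::loadPCDFile ("model.pcd", *model);
  pcl::io::loadPCDFile ("scene.pcd", *scene);

  // Surface normals, required by the SHOT descriptor.
  pcl::PointCloud<NormalT>::Ptr model_normals (new pcl::PointCloud<NormalT>), scene_normals (new pcl::PointCloud<NormalT>);
  pcl::NormalEstimationOMP<PointT, NormalT> ne;
  ne.setKSearch (10);
  ne.setInputCloud (model);  ne.compute (*model_normals);
  ne.setInputCloud (scene);  ne.compute (*scene_normals);

  // Point cloud simplification: uniform sampling provides the keypoints.
  pcl::PointCloud<PointT>::Ptr model_keys (new pcl::PointCloud<PointT>), scene_keys (new pcl::PointCloud<PointT>);
  pcl::UniformSampling<PointT> us;
  us.setInputCloud (model);  us.setRadiusSearch (0.01);  us.filter (*model_keys);
  us.setInputCloud (scene);  us.setRadiusSearch (0.03);  us.filter (*scene_keys);

  // Local features: SHOT descriptors computed at the keypoints.
  pcl::PointCloud<DescT>::Ptr model_desc (new pcl::PointCloud<DescT>), scene_desc (new pcl::PointCloud<DescT>);
  pcl::SHOTEstimationOMP<PointT, NormalT, DescT> shot;
  shot.setRadiusSearch (0.02);
  shot.setInputCloud (model_keys);  shot.setInputNormals (model_normals);  shot.setSearchSurface (model);  shot.compute (*model_desc);
  shot.setInputCloud (scene_keys);  shot.setInputNormals (scene_normals);  shot.setSearchSurface (scene);  shot.compute (*scene_desc);

  // Match each scene descriptor to its nearest model descriptor (k-d tree, k = 1).
  pcl::KdTreeFLANN<DescT> match_tree;
  match_tree.setInputCloud (model_desc);
  pcl::CorrespondencesPtr corrs (new pcl::Correspondences);
  for (std::size_t i = 0; i < scene_desc->size (); ++i)
  {
    std::vector<int> idx (1);
    std::vector<float> dist (1);
    if (!std::isfinite (scene_desc->at (i).descriptor[0]))  // skip NaN descriptors
      continue;
    if (match_tree.nearestKSearch (scene_desc->at (i), 1, idx, dist) == 1 && dist[0] < 0.25f)
      corrs->push_back (pcl::Correspondence (idx[0], static_cast<int> (i), dist[0]));
  }

  // Hough Voting clusters the correspondences into object hypotheses (6-DoF poses).
  pcl::Hough3DGrouping<PointT, PointT> hough;
  hough.setHoughBinSize (0.01);
  hough.setHoughThreshold (5.0);
  hough.setLocalRfSearchRadius (0.015);          // local reference frames are
  hough.setLocalRfNormalsSearchRadius (0.015);   // computed internally from these radii
  hough.setInputCloud (model_keys);
  hough.setSceneCloud (scene_keys);
  hough.setModelSceneCorrespondences (corrs);
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > poses;
  std::vector<pcl::Correspondences> clustered;
  hough.recognize (poses, clustered);

  // ICP refinement of every pose hypothesis.
  std::vector<pcl::PointCloud<PointT>::ConstPtr> instances;
  for (const auto &pose : poses)
  {
    pcl::PointCloud<PointT>::Ptr guess (new pcl::PointCloud<PointT>), refined (new pcl::PointCloud<PointT>);
    pcl::transformPointCloud (*model, *guess, pose);
    pcl::IterativeClosestPoint<PointT, PointT> icp;
    icp.setMaximumIterations (20);
    icp.setInputSource (guess);
    icp.setInputTarget (scene);
    icp.align (*refined);
    instances.push_back (refined);
  }

  // Global Hypothesis Verification keeps only hypotheses that explain the scene.
  pcl::GlobalHypothesesVerification<PointT, PointT> hv;
  hv.setSceneCloud (scene);
  hv.addModels (instances, false);   // occlusion reasoning disabled in this sketch
  hv.setInlierThreshold (0.01);
  hv.verify ();
  std::vector<bool> mask;
  hv.getMask (mask);                 // mask[i] == true -> hypothesis i is accepted
  return 0;
}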