
A Comparison of Vision-Based Autonomous Navigation for Target Grasping of Humanoid Robot by Enhanced SIFT and Traditional HT Algorithms

Advisor: 黃志良

Abstract


This thesis employs two single-board computers (a PICO820 and a Roborad-100) and two webcams (C905, with a recognizable distance of about 4 m) to realize and compare enhanced SIFT and the traditional Hough Transform for vision-based autonomous navigation and target grasping by a humanoid robot. The target is placed at an unknown 3-D position beyond the recognizable distance of the vision system (e.g., about 10 m away). First, images captured by the camera are transmitted to the PICO820 for image processing (e.g., converting the color image to grayscale to facilitate the SIFT computation and recognize the relevant landmarks), and the image coordinates of each landmark's center point are computed. These coordinates are fed into a pre-trained neural network to obtain the corresponding world coordinates, from which the absolute world coordinates of the humanoid robot are estimated. After comparison with the pre-planned path, the robot can autonomously correct its position and steer itself back onto the planned path. By passing the pre-arranged landmarks, acquiring their absolute world coordinates, and searching for the specific target, the target-grasping task is accomplished. In addition, once the robot is within about 12 cm of the target, the target's estimated world coordinates are fed into another pre-trained neural network to estimate the motor angles of the left and right arms so that the robot can perform the grasping motion. The most common and classic approach to vision-guided corridor navigation applies the Hough Transform (HT) to detect straight edges and guide the humanoid robot along the detected lines. Therefore, enhanced SIFT (i.e., SIFT for landmark recognition combined with neural-network-based 3-D localization) and the traditional HT are also compared in the same environment for vision-guided navigation and target grasping. Finally, two experiments for each method compare their respective strengths and weaknesses.
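The corridor-navigation baseline described above rests on the standard Hough Transform for straight-line detection. A minimal NumPy sketch of the (rho, theta) voting scheme follows; it is an illustration of the general technique, not the thesis implementation, and the accumulator resolution and threshold are arbitrary choices:

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, threshold=50):
    """Vote each edge pixel into a (rho, theta) accumulator;
    peaks correspond to straight lines such as corridor edges."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)                # edge-pixel coordinates
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    peaks = np.argwhere(acc >= threshold)
    return [(int(r) - diag, int(t)) for r, t in peaks]  # (rho, theta in deg)

# A synthetic vertical edge at x = 30 yields a strong peak at theta = 0.
edge = np.zeros((100, 100), dtype=np.uint8)
edge[:, 30] = 1
lines = hough_lines(edge, threshold=80)
```

In practice an edge detector (e.g., Canny) would produce `edge_img` from the grayscale camera frame before voting.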

English Abstract


This thesis realizes a humanoid robotic system that executes target grasping (TG) at an unknown 3-D world coordinate which lies beyond the recognizable distance of the vision system or is occluded by buildings. Suitable landmarks with known 3-D world coordinates are arranged at appropriate locations, or learned, along the path of the experimental environment. Before detecting and recognizing a landmark (LM), the humanoid robot (HR) is navigated along the pre-planned trajectory to reach the vicinity of the arranged LMs. After recognition of a specific LM via the scale-invariant feature transform (SIFT), the corresponding pre-trained multilayer neural network (MLNN) is employed to obtain, on line, the relative distance between the HR and that LM. Based on the localization corrections obtained through the LMs and the target search, the HR can be navigated correctly to the neighborhood of the target. Because the inverse kinematics (IK) of the two arms is time-consuming, another off-line MLNN model is applied to approximate the transform between the estimated ground truth of the target and the joint coordinates of the arms. Finally, comparisons between the so-called enhanced SIFT and the traditional Hough transform (HT) for straight-line detection, used to navigate the HR in the execution of target grasping, confirm the effectiveness and efficiency of the proposed method.
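The abstract describes a pre-trained MLNN that maps a recognized landmark's image coordinates to a relative 3-D position, and a second MLNN that replaces the time-consuming IK. A minimal sketch of such a forward pass is shown below; the one-hidden-layer topology, tanh activation, and weights are placeholders for illustration (the thesis trains its networks off line from measured data), not the actual trained model:

```python
import numpy as np

def mlnn_forward(img_xy, W1, b1, W2, b2):
    """One hidden layer with tanh activation and a linear output:
    maps a landmark's normalized image coordinates (u, v) to an
    estimated relative world position (X, Y, Z)."""
    h = np.tanh(W1 @ img_xy + b1)   # hidden-layer activations
    return W2 @ h + b2              # linear output layer

# Placeholder weights: random for illustration only; the real network
# would be trained off line from landmark measurements.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

# Example: a landmark center at pixel (320, 240) in a 640x480 image,
# normalized to [0, 1] before entering the network.
xyz = mlnn_forward(np.array([320.0 / 640.0, 240.0 / 480.0]), W1, b1, W2, b2)
```

The same structure, with a 3-D input (estimated target position) and joint-angle outputs, would serve as the off-line IK approximation mentioned above.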

