This thesis uses two single-board computers, a PICO820 and a Roborad-100, together with two C905 webcams (whose recognizable distance is about 4 m) to compare enhanced SIFT with the traditional Hough transform for vision-guided target grasping by a humanoid robot. The target is placed at an unknown 3-D position beyond the recognizable distance of the vision system (e.g., about 10 m away). First, images captured by the camera are transmitted to the PICO820 for image processing (e.g., converting the color image to grayscale to facilitate the SIFT computation and the recognition of the relevant landmarks), and the image coordinates of each landmark's center point are computed. These coordinates are fed into a pre-trained neural network to obtain the corresponding world coordinates, from which the absolute world coordinates of the humanoid robot are estimated. By comparing this estimate with the pre-planned path, the robot can autonomously correct its position and return to the preset path. Through the pre-arranged landmarks, the robot obtains the relevant absolute world coordinates and, via a search for the specific target, completes the target-grasping task. In addition, once the robot arrives within about 12 cm of the target, the target's estimated world coordinates are fed into another pre-trained neural network to estimate the motor angles of the left and right arms, enabling the robot to grasp the target. The most common and classical approach to vision guidance along a corridor applies the Hough transform (HT) to detect straight edges and guides the humanoid robot to walk along the detected lines. Therefore, enhanced SIFT (i.e., SIFT for landmark recognition combined with neural-network-based 3-D localization) and the traditional Hough transform are also compared in the same environment for vision-guided target grasping by a humanoid robot. Finally, two experiments for each method compare their relative merits.
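The mapping described above, from a landmark centroid's image coordinates to estimated world coordinates via a pre-trained network, can be sketched as a small feedforward pass. This is a minimal illustration only; the layer sizes, weights, and the 2-4-2 topology below are hypothetical placeholders, not the thesis's trained MLNN.

```python
import math

def mlp_forward(x, weights, biases):
    """Forward pass of a small multilayer neural network.
    Hidden layers use tanh; the output layer is linear, mapping
    the image coordinates (u, v) of a landmark centroid to an
    estimated (X, Y) world coordinate. All weights here are
    hypothetical placeholders, not the thesis's trained values."""
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = [sum(w * ai for w, ai in zip(row, a)) + bi
             for row, bi in zip(W, b)]
        # tanh on hidden layers, identity on the output layer
        a = z if i == len(weights) - 1 else [math.tanh(v) for v in z]
    return a

# Hypothetical 2-4-2 network: (u, v) image coords -> (X, Y) world coords
W1 = [[0.01, 0.02], [0.03, -0.01], [-0.02, 0.04], [0.05, 0.01]]
b1 = [0.1, -0.1, 0.0, 0.2]
W2 = [[1.0, -0.5, 0.3, 0.2], [0.4, 0.8, -0.6, 0.1]]
b2 = [0.0, 0.0]

xy = mlp_forward([320.0, 240.0], [W1, W2], [b1, b2])
```

In the thesis pipeline two such networks are trained off-line: one for landmark-based localization and one approximating the arms' inverse kinematics.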
This thesis realizes a humanoid robot (HR) system that executes target grasping (TG) at an unknown 3-D world coordinate which lies beyond the recognizable distance of the vision system or is occluded by buildings. Suitable landmarks with known 3-D world coordinates are arranged at appropriate locations, or learned, along the path of the experimental environment. Before detecting and recognizing a landmark (LM), the HR is navigated by a pre-planned trajectory to the vicinity of the arranged LMs. After a specific LM is recognized via the scale-invariant feature transform (SIFT), a corresponding pre-trained multilayer neural network (MLNN) is employed to obtain on-line the relative distance between the HR and that LM. Based on the localization corrections through the LMs and the target search, the HR can be navigated correctly to the neighborhood of the target. Because the inverse kinematics (IK) of the two arms is time-consuming, another off-line MLNN model is applied to approximate the transform between the estimated ground-truth position of the target and the joint coordinates of the arms. Finally, comparisons between the so-called enhanced SIFT and the traditional Hough transform (HT), which detects straight lines to navigate the HR toward the execution of target grasping, confirm the effectiveness and efficiency of the proposed method.
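The baseline against which enhanced SIFT is compared is the classical Hough transform for straight-line detection: each edge pixel votes for all (rho, theta) line parameters passing through it, and the most-voted cell gives the corridor edge to follow. The sketch below is a minimal pure-Python illustration of that voting scheme, not the thesis's implementation; the image size and edge points are made up for demonstration.

```python
import math

def hough_lines(edge_points, width, height, theta_steps=180):
    """Minimal Hough transform: each edge pixel votes in (rho, theta)
    space; the peak of the accumulator is returned as the dominant
    straight line. A sketch of the classical method only."""
    diag = int(math.hypot(width, height))
    # accumulator indexed by [theta_index][rho + diag] (rho may be negative)
    acc = [[0] * (2 * diag + 1) for _ in range(theta_steps)]
    for (x, y) in edge_points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[t][rho + diag] += 1
    # locate the accumulator peak
    votes, t, rho = max(
        ((acc[t][r], t, r - diag)
         for t in range(theta_steps)
         for r in range(2 * diag + 1)),
        key=lambda e: e[0])
    return votes, math.pi * t / theta_steps, rho

# Toy edge map: the vertical line x = 5 in a 20x20 image,
# which should peak near theta = 0, rho = 5
pts = [(5, y) for y in range(20)]
votes, theta, rho = hough_lines(pts, 20, 20)
```

In practice the edge points would come from an edge detector applied to the corridor image, and the detected line's parameters would steer the robot's heading.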