In this work, a Microsoft Kinect motion-sensing camera is mounted on a wheeled mobile robot, and the visual and depth information obtained from its sensors serves as the robot's environmental information during route following. The first stage of the experiment is the learning process: the user pushes the robot to preset positions and captures images, and the target visual route is composed of the images taken at these target positions. The next stage builds the database: the SURF algorithm extracts robust feature points from the target images, the user selects the regions containing more feature points, and those feature points together with their depth information are recorded in the database to save feature-matching time during navigation. Finally, in the navigation stage, the robot is placed at the start of the route. The SURF algorithm first extracts feature points from the image captured by the Kinect and matches them against the feature points in the database; the ART-2 clustering method from neural networks then retains the correctly matched feature pairs; finally, the depth information measured by the Kinect is fed into a coordinate transformation system to derive the distance between the current robot and the target position, which in turn controls the speeds of the robot's two wheels to accomplish route following. Experiments verify the results of the coordinate transformation, and a route planned in an indoor space demonstrates the robot's route-following system.
This paper proposes an approach that mounts a Kinect on a wheeled mobile robot and uses the grey-scale and depth images it captures as the information for a route-following system in indoor environments. The system needs only the Kinect and two sonar sensors to perceive the surroundings of the mobile robot. In the first stage, the learning process, the robot is guided by the user and photographs each desired position along the route. In the next stage, the speeded-up robust features (SURF) algorithm is applied to the desired images to extract feature points together with their corresponding depth data. In the final navigation stage, these stored data are matched against the SURF features extracted from the current image captured by the Kinect. The ART-2 algorithm is then applied to the matching result to retain the correctly matched features, and the depth data of those features are fed into a coordinate transformation to obtain the distance and orientation error between the navigation pose and the learned pose. Experiments on three kinds of routes in indoor environments validate the proposed approach.
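To make the SURF extraction and matching step concrete, the sketch below shows how it might be done with OpenCV's contrib module; the Hessian threshold and the 0.7 ratio-test constant are illustrative assumptions, not values taken from the thesis.

```python
import cv2

# SURF lives in the opencv-contrib package (cv2.xfeatures2d);
# hessianThreshold=400 is an assumed, tunable value.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def match_features(target_gray, current_gray):
    """Extract SURF features from both images and match them
    with a brute-force matcher plus Lowe's ratio test."""
    kp1, des1 = surf.detectAndCompute(target_gray, None)
    kp2, des2 = surf.detectAndCompute(current_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    # Keep a match only if it is clearly better than the runner-up.
    good = [m[0] for m in raw
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
    return kp1, kp2, good
```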
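The abstract applies ART-2 to keep only the correctly matched feature pairs. Below is a minimal leader-style simplification of ART-2's vigilance test, clustering the pixel displacements of matched pairs and keeping the dominant cluster as the inlier set; the vigilance radius and prototype update rule are assumptions for illustration, not the full ART-2 network used in the thesis.

```python
import numpy as np

def keep_consistent_matches(displacements, vigilance_radius=15.0):
    """Cluster (dx, dy) displacement vectors of matched pairs in the
    spirit of ART-2's vigilance test; pairs in the largest cluster are
    treated as correct matches. vigilance_radius (pixels) is assumed."""
    prototypes, members = [], []
    for i, v in enumerate(np.asarray(displacements, dtype=float)):
        # Find the nearest existing cluster prototype.
        dists = [np.linalg.norm(v - p) for p in prototypes]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] <= vigilance_radius:
            # Vigilance passed: assign the match and move the
            # prototype toward the input (incremental mean).
            members[j].append(i)
            prototypes[j] += (v - prototypes[j]) / len(members[j])
        else:
            # Vigilance failed everywhere: start a new cluster.
            prototypes.append(v.copy())
            members.append([i])
    return max(members, key=len)  # indices of the dominant cluster
```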
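For the coordinate transformation and wheel-speed control, a minimal sketch follows, assuming nominal Kinect v1 RGB intrinsics at 640x480, a simple proportional control law, and a 0.3 m wheel track; the calibrated parameters and the controller in the thesis may differ.

```python
import math

# Nominal Kinect v1 intrinsics; the thesis' calibrated values may differ.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def pixel_to_camera(u, v, depth_m):
    """Back-project a pixel and its depth reading into camera
    coordinates (meters), using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return x, y, depth_m

def pose_error(x, z):
    """Distance and heading error of the target point in the camera frame."""
    return math.hypot(x, z), math.atan2(x, z)

def wheel_speeds(distance_err, heading_err, v_gain=0.5, w_gain=1.0, track=0.3):
    """Proportional differential-drive law (gains and wheel track assumed):
    forward speed from the distance error, turning from the heading error."""
    v = v_gain * distance_err          # m/s toward the target position
    w = w_gain * heading_err           # rad/s to null the orientation error
    return v - w * track / 2.0, v + w * track / 2.0  # (left, right)
```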