  • Thesis/Dissertation

Detection and Tracking of Point and Region Features in Environments for Autonomous Robot Systems

Advisor: 王銀添

Abstract


This thesis investigates the problem of environment cognition for autonomous robots, using a CMOS robot vision system together with image feature detection and tracking algorithms to help the robot perceive its surroundings. In the first stage, point features in the natural environment are detected and provided to the robot for localization and map building. Based on the same feature points, passable paths and obstacles in the environment are then distinguished, which supports the robot's obstacle-avoidance mechanism. In the second stage, the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) methods are used to detect local or region image features in the natural environment, including the point coordinates and orientation descriptors of the features, so that obstacles can be recognized directly and the results can be used for robot localization and obstacle avoidance.
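
As a rough illustration of the first stage only (this is not taken from the thesis, which does not specify the exact pipeline), the following Python sketch detects point features with the Shi-Tomasi corner detector and tracks them between two frames with pyramidal Lucas-Kanade optical flow in OpenCV; the file names frame0.png and frame1.png are hypothetical placeholders:

# Illustrative sketch only: Shi-Tomasi corner detection plus pyramidal Lucas-Kanade
# tracking in OpenCV; the thesis does not specify this exact pipeline, and the input
# file names frame0.png / frame1.png are hypothetical placeholders.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect up to 200 point features (corners) in the first frame.
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Track the detected points into the next frame with pyramidal Lucas-Kanade optical flow.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None,
                                                 winSize=(21, 21), maxLevel=3)

# Keep only the points that were tracked successfully.
good_old = pts[status.flatten() == 1]
good_new = next_pts[status.flatten() == 1]
print(f"tracked {len(good_new)} of {len(pts)} point features")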

Parallel Abstract


The aim of this thesis is to study environment cognition for an autonomous robot. A CMOS robot vision system is used to capture images of the environment, and image-processing algorithms for feature detection and tracking form a mechanism by which the robot can perceive its surroundings. The research is divided into two stages. In the first stage, point features in the natural environment are detected and tracked. The three-dimensional coordinates of the point features are calculated so that the robot can carry out localization and map building. Furthermore, nearby point features are gathered into clusters and treated as obstacles, which are distinguished from the feasible paths for robot motion. In the second stage, the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) methods are employed to detect and track local or region image features, including the coordinates and orientation descriptors of the interest points. The purpose of the second stage is to give the robot system the ability to recognize and avoid obstacles simultaneously.
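
As a rough illustration of the second stage (again, not the thesis's own implementation), the following Python sketch detects SIFT region features and matches them with Lowe's ratio test using OpenCV; SURF would follow the same pattern via cv2.xfeatures2d.SURF_create(), which requires an opencv-contrib build with the non-free modules enabled. The file names object.png and scene.png are hypothetical placeholders:

# Illustrative sketch only: SIFT region-feature detection and matching with Lowe's
# ratio test in OpenCV; object.png and scene.png are hypothetical placeholders.
import cv2

img1 = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # reference view of an obstacle
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)    # current camera frame

# Each keypoint carries position, scale, and orientation; descriptors are 128-D vectors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with the ratio test to discard ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches out of {len(matches)} candidate pairs")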

References


[2] Bay, H., A. Ess, T. Tuytelaars, and L. Van Gool, 2008, SURF: speeded up robust features, Computer Vision and Image Understanding, vol.110, pp.346-359.
[5] Baumberg, A., 2000, Reliable feature matching across widely separated views, Proceedings of Computer Vision and Pattern Recognition, pp.774-781.
[6] Chang, W.-C., 2007, Precise positioning of binocular eye-to-hand robotic manipulators, Journal of Intelligent and Robotic Systems, vol.49, pp.219-236.
[7] Davison, A.J., I.D. Reid, N.D. Molton, and O. Stasse, 2007, MonoSLAM: real-time single camera SLAM, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.29, no.6, pp.1052-1067.
[22] 黃鈴凱, Nonverbal interaction between humans and robots using hand gesture recognition, Master's thesis, Department of Mechanical and Electro-Mechanical Engineering, Tamkang University, June 2007.

Cited By


陳義智 (2008). Self-localization method and development platform for an omnidirectional robot with omnidirectional vision [Master's thesis, Tamkang University]. Airiti Library. https://doi.org/10.6846/TKU.2008.00711
王人蔚 (2008). Natural-feature-based simultaneous localization and mapping for a small humanoid robot with monocular vision [Master's thesis, Tamkang University]. Airiti Library. https://doi.org/10.6846/TKU.2008.00564
