  • Degree thesis

Robot Vision for Object Finding in a Domestic Environment Based on SIFT and a Modified Saliency Map

Advisor: 吳世弘

Abstract


This thesis proposes a robot vision system whose goal is to find a specific object in a real-world space. A saliency map and the scale-invariant feature transform (SIFT) supply information about the objects visible in the camera view, while a spatial probability map supplies information about the object's likely location, allowing the robot to move through the space, gradually approach the object, and ultimately locate it. In addition to previously published saliency-map construction methods, this thesis introduces a training concept: the features of the target object are analyzed and substituted into the saliency-map construction formula. The experimental section evaluates and analyzes the performance of the proposed algorithm. We also captured a data set in a real indoor environment to provide a reproducible setting for the object-finding experiments; it records every view the robot can see while moving through the space. Using this data set, we compare the travel cost incurred by different update strategies as the robot moves through the room to find the object.
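The saliency-map construction described above builds on the classic center-surround contrast idea (Itti et al. [reference in the thesis]). As a rough, dependency-free illustration of that underlying mechanism only (not the thesis's trained, object-specific formula), the sketch below computes a single-channel saliency map by subtracting a coarse box blur from a fine one; the kernel sizes `center_k` and `surround_k` are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Blur a 2-D array with a k x k box filter using an integral image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col so window sums index cleanly
    h, w = img.shape
    s = ii[k:k + h, k:k + w] - ii[:h, k:k + w] - ii[k:k + h, :w] + ii[:h, :w]
    return s / (k * k)

def center_surround_saliency(intensity, center_k=3, surround_k=15):
    """Center-surround contrast |fine blur - coarse blur|, normalized to [0, 1]."""
    c = box_blur(intensity, center_k)
    s = box_blur(intensity, surround_k)
    sal = np.abs(c - s)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

A lone bright pixel on a dark background, for example, yields a saliency peak at that pixel, while a uniform image yields zero saliency everywhere. The thesis's modification would additionally weight the channels by features learned from the target object during training.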

Parallel Abstract


This thesis presents a robot vision system, based on a saliency map and SIFT, that a robot can use to find an object in a domestic environment. A traditional saliency map enhances the weight of every object, which is unhelpful when the robot is looking for one specific object in a cluttered environment. We therefore introduce a training concept into saliency-map construction: the system analyzes the specific object before building the map, so the modified saliency map enhances only that object. The vision system combines the saliency-map result with SIFT matching against the trained object to update a probability map, and according to this map the robot moves gradually toward the object. In the experiments we report the performance of our method and compare different update tactics. To make the experiment repeatable in the same environment, we built a data set of a domestic environment containing pictures of every view the robot can see.
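The probability-map update described above can be sketched as a multiplicative re-weighting of grid cells followed by renormalization. This is a minimal illustration under assumed conventions, not the thesis's exact update rule: `match_score` stands in for a SIFT-matching confidence in [0, 1], and the `strength` parameter is hypothetical.

```python
import numpy as np

def update_probability_map(prob, cell, match_score, strength=4.0):
    """Raise the probability of the observed grid cell in proportion to the
    SIFT match score against the trained object, then renormalize so the
    map remains a probability distribution."""
    likelihood = np.ones_like(prob)
    likelihood[cell] = 1.0 + strength * match_score
    posterior = prob * likelihood
    return posterior / posterior.sum()

# Usage: start from a uniform 5 x 5 map, then observe cell (2, 3).
p = np.full((5, 5), 1.0 / 25)
p = update_probability_map(p, (2, 3), match_score=0.8)
```

A simple planner could then steer the robot toward `np.unravel_index(p.argmax(), p.shape)`, so repeated observations let it approach the object gradually, as the abstract describes.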

Keywords

Robot vision; Saliency map; Probability map

