This paper applies the scale-invariant feature transform (SIFT) recognition method to an autonomous vehicle; the scale-invariant features are combined with Bayesian probability to derive the feature vectors of an image. For image-based localization, a query image is captured in the current environment; through the extraction and description of the scale-invariant features, the probability that each interest point in the image appears in each region of the environment can be found, and from this the probable position of the whole query image is predicted. For navigation, four images are captured at the current position for localization prediction, the vehicle judges which of them is closer to the target, and it moves in that direction. Four experimental cases are presented: in the first, the autonomous vehicle moves with no obstacles; in the second, a static obstacle is placed for the vehicle to recognize; in the third, a dynamic obstacle is placed; in the fourth, recognition is tested in different scenes. The experimental results show that localization remains effective under different viewing angles and illumination conditions.
This paper presents an autonomous vehicle that uses the scale-invariant feature transform (SIFT) method. The scale-invariant features are combined with Bayesian probability to create an image's feature vectors, from which the autonomous vehicle can estimate its position and its direction toward the target. For vision-based localization, we grab one query image in the current environment; through the detection and description of scale-invariant features, we can find the probability that each interest point in that image belongs to each region of the environment, and from these probabilities predict the position of the whole query image. For navigation, we grab four query images at the current position and advance in the direction with the highest probability of leading to the next target position. We propose four experimental cases: in the first, the autonomous vehicle moves with no obstacle; in the second, a static obstacle is placed; in the third, a dynamic obstacle is placed; in the fourth, recognition is tested in different scenes. The experimental results show good localization under different viewing angles and illumination, and successful navigation of an indoor environment under occlusion.
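The localization and navigation steps summarized above could be sketched as follows. This is a minimal toy, not the paper's exact formulation: it assumes precomputed per-region descriptor databases, uses plain nearest-neighbour voting to approximate the per-region probabilities, and all function names (`region_probabilities`, `choose_direction`) and the 2-D "descriptors" are illustrative stand-ins for real SIFT descriptors.

```python
import math

def euclidean(a, b):
    # Plain Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def region_probabilities(query_descriptors, region_db):
    # Each query descriptor votes for the region whose database holds
    # its nearest neighbour; normalized votes approximate P(region | image).
    votes = {region: 0 for region in region_db}
    for d in query_descriptors:
        best = min(region_db,
                   key=lambda r: min(euclidean(d, ref) for ref in region_db[r]))
        votes[best] += 1
    total = sum(votes.values()) or 1
    return {r: v / total for r, v in votes.items()}

def choose_direction(direction_images, target_region, region_db):
    # Navigation step: one query image per direction; move toward the
    # direction whose image gives the target region the highest probability.
    return max(direction_images,
               key=lambda d: region_probabilities(direction_images[d],
                                                  region_db).get(target_region, 0.0))

# Toy 2-D "descriptors" standing in for 128-D SIFT descriptors.
db = {"hallway": [[0.0, 0.0], [0.1, 0.1]],
      "lab":     [[1.0, 1.0], [0.9, 1.1]]}
probs = region_probabilities([[0.05, 0.0], [1.0, 0.95], [0.0, 0.1]], db)
print(max(probs, key=probs.get))  # region receiving the most votes
```

In practice the descriptors would come from a SIFT detector (e.g. OpenCV's `cv2.SIFT_create()`), and the paper's Bayesian model replaces the simple vote count; the control flow, however, follows the abstract: localize each candidate view, then advance toward the highest-probability direction.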