Performance on Single-Shot Camera Localization Using Handcrafted or Deep-Learning Features

Advisor: 洪一平

Abstract


In recent years, camera ego-localization has been industrialized in many areas. For example, robots and autonomous vehicles rely on visual localization to estimate their position, which makes ego-localization technology critically important. One of the most common approaches to visual localization is based on image features. This thesis compares how handcrafted features and deep-learning features affect the localization accuracy of a single-shot method; the single-shot method used in the experiments is based on image retrieval. Two classic handcrafted feature extraction methods and five deep-learning feature extraction methods that have become popular in recent years are evaluated. The experimental datasets contain images with seasonal changes and illumination changes (weather changes). Localization accuracy is compared under different accuracy thresholds, the likely causes of the performance differences are analyzed, and the advantages and disadvantages of each method are discussed. This provides ideas and improvement directions for subsequent image-based localization research, especially for localization under illumination changes.

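The single-shot pipeline described above is retrieval-based: a query image is matched against reference images with known camera poses, and the pose of the best-matching reference is returned as the estimate. Below is a minimal sketch of that idea in Python, assuming OpenCV's SIFT as the handcrafted feature; the pose database layout, the ratio-test scoring, and the accuracy thresholds are illustrative assumptions, not the thesis's actual implementation.

# Minimal sketch of image-retrieval-based single-shot localization with a
# handcrafted feature (SIFT via OpenCV). The pose database, ratio-test
# scoring, and threshold values are assumptions for illustration only.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def extract(image_path):
    """Detect keypoints and compute SIFT descriptors for one image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    return desc

def match_score(query_desc, db_desc, ratio=0.8):
    """Count matches passing Lowe's ratio test (higher = more similar)."""
    if query_desc is None or db_desc is None:
        return 0
    good = 0
    for pair in matcher.knnMatch(query_desc, db_desc, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

def localize(query_path, database):
    """Approximate the query pose by the pose of the best-matching
    reference image; `database` is a list of (image_path, pose) pairs."""
    query_desc = extract(query_path)
    scores = [match_score(query_desc, extract(path)) for path, _ in database]
    return database[int(np.argmax(scores))][1]

def accuracy_at_thresholds(position_errors_m, thresholds=(0.25, 0.5, 5.0)):
    """Fraction of queries localized within each accuracy threshold
    (thresholds in meters are illustrative, not the thesis's choice)."""
    errors = np.asarray(position_errors_m)
    return {t: float(np.mean(errors <= t)) for t in thresholds}

Swapping extract() for a learned detector-descriptor such as SuperPoint or R2D2 would leave the retrieval and evaluation logic untouched, which is what makes a like-for-like comparison of handcrafted and deep-learning features possible.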

References


[1] Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. SuperPoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 224–236, 2018.
[2] Yuki Ono, Eduard Trulls, Pascal Fua, and Kwang Moo Yi. LF-Net: Learning local features from images. arXiv preprint arXiv:1805.09662, 2018.
[3] Mihai Dusmanu, Ignacio Rocco, Tomas Pajdla, Marc Pollefeys, Josef Sivic, Akihiko Torii, and Torsten Sattler. D2-Net: A trainable CNN for joint description and detection of local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8092–8101, 2019.
[4] Jerome Revaud, Philippe Weinzaepfel, César De Souza, Noe Pion, Gabriela Csurka, Yohann Cabon, and Martin Humenberger. R2D2: Repeatable and reliable detector and descriptor. arXiv preprint arXiv:1906.06195, 2019.
[5] Yurun Tian, Bin Fan, and Fuchao Wu. L2-Net: Deep learning of discriminative patch descriptor in Euclidean space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 661–669, 2017.
