Simultaneous localization and mapping (SLAM) is a general approach to the self-positioning problem of aerial vehicles. However, existing SLAM methods all have various limitations: in scenes that lack feature points or contain repetitive patterns, or under motion blur of the camera, the SLAM positioning results may be lost or drift. To overcome the limitations of vision-only self-positioning for aerial vehicles, we propose a complementary positioning method. Based on an extended Kalman filter, the method fuses the IMU as an additional sensor into the positioning process and uses IMU readings to correct the visual positioning results. Furthermore, the method combines two different categories of SLAM, feature-based and direct methods, for complementary positioning. Experiments show that our method not only improves the visual positioning results, but can also switch to the other SLAM method for complementary positioning when one suffers a large failure, making the aerial vehicle's self-positioning more stable.
Visual simultaneous localization and mapping (SLAM) is a common solution for ego-positioning of drones. However, SLAM may lose tracking due to fast camera motion, featureless or repetitive environments, etc. To overcome the limitations of vision-only SLAM, we propose a new complementary method in this paper. It fuses the visual positioning results with those estimated by an inertial measurement unit (IMU) in a loosely-coupled framework, and further combines feature-based SLAM with direct SLAM via underperformance detection to keep the advantages of both methods: it preserves the accuracy of feature-based SLAM while overcoming featureless or motion-blurred scenes with direct SLAM. Experiments on both simulated and real datasets show that the proposed method improves the accuracy of current vision-only SLAM and yields more robust positioning results with the complementary framework.
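The loosely-coupled fusion described above can be illustrated with a minimal extended-Kalman-filter sketch: the IMU drives the prediction step, and the visual SLAM pose estimate enters as a measurement in the update step. This is only an illustrative sketch under strong simplifying assumptions (a 1-D constant-acceleration model with hand-picked noise covariances); the paper's actual state vector, noise models, and underperformance-detection logic are not specified in the abstract.

```python
import numpy as np

class LooselyCoupledEKF:
    """Illustrative loosely-coupled EKF: IMU acceleration propagates the
    state; a visual SLAM position estimate corrects it.
    Assumption: 1-D state [position, velocity]; noise values are made up."""

    def __init__(self, dt):
        self.x = np.zeros(2)                   # state: [position, velocity]
        self.P = np.eye(2)                     # state covariance
        self.F = np.array([[1.0, dt],          # constant-velocity transition
                           [0.0, 1.0]])
        self.B = np.array([0.5 * dt**2, dt])   # acceleration control input
        self.Q = 0.01 * np.eye(2)              # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])        # SLAM observes position only
        self.R = np.array([[0.1]])             # measurement noise (assumed)

    def predict(self, imu_accel):
        """Prediction step driven by an IMU acceleration reading."""
        self.x = self.F @ self.x + self.B * imu_accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, slam_position):
        """Update step using a visual SLAM position estimate."""
        y = slam_position - self.H @ self.x       # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Example: one IMU prediction followed by one SLAM correction.
ekf = LooselyCoupledEKF(dt=0.1)
ekf.predict(imu_accel=1.0)
ekf.update(np.array([0.5]))
```

In a complementary setup, the `update` call would use whichever SLAM front end (feature-based or direct) is currently judged reliable, while the IMU-driven `predict` bridges the gap when both visual estimates are temporarily unusable.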