
A Search-Based Tracker with Position-Aware Motion Model

Advisor: 莊永裕 (Yung-Yu Chuang)

Abstract


In this thesis, we focus on solving the multi-object tracking and instance segmentation problem. Most current top-performing solutions, such as PointTrack, concentrate on finding better appearance feature representations. Besides appearance, object motion is an equally important cue when associating objects. Combining the two, we employ dynamic instance-aware networks together with position information predicted by a Kalman filter. With hints from the predicted object motion, our dynamic searcher can localize each object in the next frame. Our tracker, SearchTrack, fully integrates appearance features with object motion. We propose a joint multi-task tracker that is fast, intuitive, and more accurate than existing methods on both 2D multi-object tracking and multi-object tracking and segmentation. Our method achieves 71.2/57.6 HOTA on the KITTI MOTS leaderboard for the car and pedestrian categories, respectively. In addition, it achieves 53.1 HOTA on MOT17.

Abstract (English)


In this paper, we focus on the multi-object tracking and segmentation (MOTS) problem. Top-performing work such as PointTrack focuses on finding better representations for appearance identities. In addition to object appearance, object motion is another key cue that should be taken into account during association. Instead of choosing one of the two, we employ dynamic instance-aware networks combined with a coordinate map given by a Kalman filter. With the hint of predicted motion, our dynamic searcher can localize the associated object in the next frame. Our work, SearchTrack, combines appearance and motion in a comprehensive way. We present a joint-task tracker that is fast, straightforward, and more accurate than state-of-the-art online methods, on both 2D MOTS and MOT. It achieves 71.2 HOTA (car) and 57.6 HOTA (pedestrian) on the KITTI MOTS leaderboard. On MOT17, it achieves 53.1 HOTA.
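The abstract mentions a Kalman filter that predicts each object's position and a coordinate map that hints the dynamic searcher where to look in the next frame, but gives no implementation details. The following Python sketch is illustrative only: it assumes a constant-velocity Kalman filter over box centers and renders the predicted center as a Gaussian hint map. All names (ConstantVelocityKF, position_hint_map) and parameter values are hypothetical, not taken from SearchTrack.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over an object's (x, y) center.

    Assumes a 4D state [x, y, vx, vy] with unit time steps; noise levels are
    illustrative placeholders, not values from the thesis.
    """

    def __init__(self, x, y):
        self.state = np.array([x, y, 0.0, 0.0])   # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)                        # transition: x += vx, y += vy
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                     # we observe (x, y) only
        self.Q = np.eye(4) * 0.01                 # process noise
        self.R = np.eye(2) * 1.0                  # measurement noise

    def predict(self):
        """Propagate the state one frame ahead; return the predicted center."""
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx, zy):
        """Correct the state with the matched detection's center."""
        z = np.array([zx, zy])
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P

def position_hint_map(center, shape, sigma=8.0):
    """Render a Gaussian 'coordinate map' centered at the predicted position,
    which a per-object searcher could take as an extra input channel."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# Usage: track one object across frames, searching near the predicted center.
kf = ConstantVelocityKF(x=120.0, y=80.0)
pred = kf.predict()                        # where to look in the next frame
hint = position_hint_map(pred, shape=(256, 512))
kf.update(zx=123.0, zy=82.0)               # correct with the matched detection
```

In an actual tracker, such a hint map would be fed alongside image features to a per-object searcher; here it only illustrates how predicted motion can narrow the association search to a neighborhood of the expected position.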
