  • Thesis

Radar and RGB Camera Sensor Fusion Technology and its Embedded System Implementation for Object Detection and Tracking

Advisor: 郭峻因

Abstract


This thesis combines two sensors, a radar and an RGB camera, to detect and track objects on the road, including cars, motorcycles, and pedestrians. Image recognition provides the class and tracking ID of each object, while radar detection supplies the object's actual distance and velocity; because radar signals are less affected by lighting and weather, they help maintain the camera's detection and tracking performance under unstable light and weather conditions. A Texas Instruments AWR1642 77 GHz millimeter-wave radar collects information about objects ahead, and a Logitech C920r webcam provides the RGB images. After the radar signal is processed, the relative distance and relative velocity between the objects ahead and the dual-sensor device are obtained; a clustering algorithm then locates the actual object positions, and a Kalman filter predicts and tracks the radar object points. For the RGB images captured by the camera, a deep learning model performs object detection, and a Kalman filter then tracks the detected objects in the image. The radar tracking points are projected onto the image, and the radar tracking points and the image tracking points are tracked once more together to obtain more stable tracking points; these final tracking points are then fed back into the next round of image tracking to stabilize the object IDs on screen. In the test scenarios defined in this thesis, the proposed method achieves an average object detection accuracy above 95%, a 15% improvement over using deep-learning image object detection alone. The proposed multi-sensor fusion method for object detection and tracking runs not only on a PC but also on embedded systems: on the Nvidia Xavier embedded platform it runs at about 6 FPS, and it can be applied to roadside units in intelligent transportation scenarios to detect moving objects in all directions and issue warning signals, reducing the probability of traffic accidents.
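The clustering step that groups processed radar returns into object positions can be sketched as a small density-based grouping of 2-D detections. The thesis does not specify which clustering algorithm or parameters it uses, so the DBSCAN-style logic and the `eps`/`min_pts` values below are illustrative assumptions only.

```python
import math

def cluster_radar_points(points, eps=1.5, min_pts=2):
    """Group 2-D radar detections (x, y in metres) into objects.

    A minimal density-based (DBSCAN-style) clustering sketch.
    Points with fewer than min_pts neighbours within eps stay
    labelled -1 (noise); everything else gets a cluster id.
    """
    n = len(points)
    labels = [-1] * n            # -1 = noise / unassigned
    cid = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        nbrs = [j for j in range(n)
                if math.dist(points[i], points[j]) <= eps]
        if len(nbrs) < min_pts:
            continue             # too sparse: leave as noise for now
        labels[i] = cid
        queue = [j for j in nbrs if labels[j] == -1]
        for j in queue:
            labels[j] = cid
        # grow the cluster outwards from dense neighbours
        while queue:
            k = queue.pop()
            kn = [j for j in range(n)
                  if math.dist(points[k], points[j]) <= eps]
            if len(kn) >= min_pts:
                fresh = [j for j in kn if labels[j] == -1]
                for j in fresh:
                    labels[j] = cid
                queue.extend(fresh)
        cid += 1
    return labels
```

For example, two tight groups of returns plus one isolated point would yield two cluster ids and one noise label, and each cluster's centroid can then serve as the object position handed to the tracker.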

Parallel Abstract


This thesis combines two sensors, i.e., a radar and an RGB camera, and focuses on camera/radar sensor fusion technology for object detection (including cars, motorcycles, and pedestrians) and tracking on the road. The proposed design obtains the type and tracking number of each object from the camera image, and then uses radar object detection to provide the actual distance and velocity of the object. Because the radar is less affected by light intensity and weather conditions, it can help improve the camera's object recognition under various conditions. In this thesis, a Texas Instruments AWR1642 77 GHz millimeter-wave radar is used to collect the front radar information, and a Logitech C920r camera is used to provide RGB images. After the radar signal is processed, the relative distance and relative velocity between the front object and the sensor fusion device are obtained. A clustering algorithm is then used to find the real positions of objects, and a Kalman filter is used to predict and track the radar points of objects. In the camera pipeline, a deep learning model identifies objects in the image, and a Kalman filter is likewise used to predict and track the objects in the image. After the radar tracking points are projected onto the image, the radar tracking points and the image tracking points are fed into a track-to-track fusion system to generate more stable tracking points. Finally, the track-to-track points are fed back into the next image tracking step to stabilize the labels of the objects in the image. In the test scenario, the average accuracy of the proposed object detection method is over 95%, which is 15% higher than using the deep learning model alone. The proposed sensor fusion method is not only developed on a PC but also implemented on embedded systems. On the Nvidia Xavier platform, the proposed design achieves about 6 FPS with 77 GHz radar input and 640x360 image input.
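The Kalman-filter tracking applied to both the radar points and the image objects can be illustrated with a minimal 1-D constant-velocity filter. The state model, time step, and noise values `q`/`r` below are illustrative assumptions for the sketch, not the thesis's tuned parameters, and a real tracker would run one such filter per coordinate per tracked object.

```python
class ConstantVelocityKF:
    """1-D constant-velocity Kalman filter sketch.

    State is [position, velocity]; only position is measured, as when
    tracking a radar range reading (or a bounding-box coordinate) over
    time. Process noise q and measurement noise r are assumed values.
    """
    def __init__(self, pos, dt=0.1, q=0.01, r=0.5):
        self.x = [pos, 0.0]                  # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # estimate covariance
        self.dt, self.q, self.r = dt, q, r

    def predict(self):
        dt = self.dt
        # x = F x with F = [[1, dt], [0, 1]]
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = F P F^T + Q, with Q = q * I as a simple noise model
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q,
             p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, z):
        # innovation with H = [1, 0] (position-only measurement)
        y = z - self.x[0]
        s = self.P[0][0] + self.r            # innovation variance
        k0 = self.P[0][0] / s                # Kalman gain
        k1 = self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = (I - K H) P
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [p10 - k1 * p00, p11 - k1 * p01],
        ]
        return self.x[0]
```

Fed a sequence of range measurements from an object approaching at a steady speed, the filter's velocity estimate converges toward the true speed while the position estimate smooths out measurement noise.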
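Projecting radar tracking points onto the image plane is, in its simplest form, a pinhole-camera projection. The sketch below assumes identity radar-to-camera extrinsics and made-up intrinsics (`fx`, `fy`, `cx`, `cy`) for a 640x360 image; the thesis would use calibrated values for both.

```python
def project_radar_to_image(x, y, z=0.0,
                           fx=600.0, fy=600.0, cx=320.0, cy=180.0):
    """Project a radar point onto the image with a pinhole model.

    x is lateral offset, y is forward range, z is height, all in
    metres, with radar and camera frames assumed already aligned.
    Intrinsics are illustrative values for a 640x360 image.
    """
    # camera frame convention: X right, Y down, Z forward
    X, Y, Z = x, -z, y
    if Z <= 0:
        return None              # behind the camera: not visible
    u = fx * X / Z + cx          # horizontal pixel coordinate
    v = fy * Y / Z + cy          # vertical pixel coordinate
    return (u, v)
```

A point straight ahead at ground level projects to the principal point (image centre here); once projected, each radar track can be associated with the nearest image track as input to the track-to-track stage.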

