
惡劣天氣條件下基於關聯注意力機制融合雷達和光達進行物件偵測

Fusion of Radar and LiDAR Using Associative Mechanism for Object Detection in Adverse Weather Conditions

Advisor: 李明穗 (Ming-Sui Lee)

Abstract


With the continuous development of deep learning, the accuracy of object detection keeps improving, and the realization of Level 5 autonomous driving is within reach. Under favorable weather conditions, the average precision of object detection can exceed 85 percent. However, the weather is not always ideal: rain, fog, and even snow can sharply degrade detection accuracy. Traditional sensors such as cameras and LiDAR are all susceptible to adverse weather, so we adopt a fusion of RADAR and LiDAR for object detection. RADAR is unaffected by harsh environments but produces many noisy points, so we use LiDAR as an auxiliary sensor: its precise environmental point clouds help suppress ghost detections. We fuse LiDAR and RADAR features with an attention mechanism, propose a Feature Selection Module to address the problem of attention weights, and further propose an Associative Feature Fusion Module to fully exploit the features selected by the attention mechanism. Experiments show that the proposed model outperforms state-of-the-art RADAR and LiDAR models.

English Abstract


With the continuous development of deep learning technology, the accuracy of object detection has been steadily improving, and the realization of Level 5 autonomous driving is within reach. In favorable weather conditions, the average precision of object detection can reach over 85 percent. However, the weather is not always ideal, and conditions such as rain, fog, and even snow can significantly reduce detection accuracy. Traditional sensors like cameras and LiDAR are susceptible to harsh weather. Therefore, we adopt a fusion of RADAR and LiDAR for object detection. RADAR is unaffected by adverse environmental conditions but introduces many noisy points in its point clouds. Hence, we utilize LiDAR as an auxiliary sensor: it provides accurate environmental point cloud information, which helps mitigate ghost detections. We employ an attention mechanism to fuse the features from LiDAR and RADAR. Additionally, we propose a Feature Selection Module to address the issue of attention weights in the attention mechanism, and an Associative Feature Fusion Module to fully utilize the features selected by the attention mechanism. Through experiments, we demonstrate that our proposed model outperforms state-of-the-art RADAR and LiDAR models.
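The fusion scheme described above can be illustrated with a minimal sketch. This is a hypothetical toy version, not the thesis's actual architecture: LiDAR features act as attention queries over radar features, a top-k mask stands in for the Feature Selection Module, and a residual sum stands in for the Associative Feature Fusion Module. The function names and the choice of `top_k` are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax; rows of -inf scores get zero weight."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(lidar_feats, radar_feats, top_k=4):
    """Fuse noisy radar features into lidar features via cross-attention.

    lidar_feats: (N, d) query tokens; radar_feats: (M, d) key/value tokens.
    top_k masking is a stand-in for feature selection: each lidar query
    attends only to its top_k highest-scoring radar tokens, suppressing
    noise from the rest.
    """
    d = lidar_feats.shape[1]
    scores = lidar_feats @ radar_feats.T / np.sqrt(d)        # (N, M)
    if top_k < scores.shape[1]:
        # k-th largest score per row; everything below it is masked out.
        thresh = np.sort(scores, axis=1)[:, -top_k][:, None]
        scores = np.where(scores >= thresh, scores, -np.inf)
    weights = softmax(scores, axis=1)                        # (N, M)
    attended = weights @ radar_feats                         # (N, d)
    # Residual combination as a simple stand-in for associative fusion.
    return lidar_feats + attended
```

With random `(5, 8)` LiDAR and `(12, 8)` radar feature matrices, the output keeps the LiDAR shape `(5, 8)`; each row adds a convex combination of at most `top_k` radar rows to the original LiDAR feature.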

