
Road Marking Extraction and Classification from Mobile LiDAR Point Clouds Derived Imagery Using Transfer Learning

Extracting and Classifying Road Markings from Mobile LiDAR Point Cloud Imagery Using Transfer Learning

Abstract


High Definition (HD) Maps are highly accurate 3D maps that contain features on or near the road to assist navigation in Autonomous Vehicles (AVs). One of the main challenges in producing such maps is the automatic extraction and classification of road markings from mobile mapping data. In this paper, a methodology is proposed that uses transfer learning to extract and classify road markings from mobile LiDAR data. The procedure comprises preprocessing, training, class extraction, and accuracy assessment. First, the point clouds were filtered and converted to intensity-based images using several grid-cell sizes. The imagery was then manually annotated and split to create the training and testing datasets. The training dataset underwent augmentation before serving as input for evaluating several openly available pre-trained neural network models. The models were then applied to the testing dataset and assessed by their precision, recall, and F1 scores for extraction, and by their error rates for classification. Further processing generated classified point clouds and polygonal vector shapefiles. The results indicate that, among the models and training sets evaluated, the best performer is the pre-trained U-Net model trained on the intensity-based images with a 5 cm resolution. It achieved F1 scores comparable with recent work and error rates below 15%. However, the classification error rates are still around two to four times higher than those of recent work; it is therefore recommended to separate the extraction and classification procedures, with an intermediate step to remove misclassifications.
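As a rough illustration of the preprocessing step, the sketch below rasterizes a point cloud into a mean-intensity grid at a chosen cell size (e.g. 0.05 m for the 5 cm images). It is a minimal sketch, not the paper's implementation: the points_to_intensity_image name, the (x, y, z, intensity) column layout, and the use of the per-cell mean intensity are assumptions made for illustration.

import numpy as np

def points_to_intensity_image(points, cell_size=0.05):
    # points: (N, 4) array of x, y, z, intensity (assumed layout).
    # cell_size: grid-cell size in metres, e.g. 0.05 for 5 cm.
    # Returns a 2D array of mean intensities (0 where a cell is empty),
    # with the grid origin at the lower-left corner of the point cloud.
    xy = points[:, :2]
    intensity = points[:, 3]

    origin = xy.min(axis=0)
    cols_rows = np.floor((xy - origin) / cell_size).astype(int)
    n_cols, n_rows = cols_rows.max(axis=0) + 1

    # Accumulate intensity sums and point counts per cell.
    flat = cols_rows[:, 1] * n_cols + cols_rows[:, 0]
    sums = np.bincount(flat, weights=intensity, minlength=n_rows * n_cols)
    counts = np.bincount(flat, minlength=n_rows * n_cols)

    image = np.zeros(n_rows * n_cols)
    mask = counts > 0
    image[mask] = sums[mask] / counts[mask]
    return image.reshape(n_rows, n_cols)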

Parallel Abstract


High Definition (HD) maps are the highly accurate 3D maps needed to support autonomous vehicles, and producing them automatically from mobile mapping data remains a challenge. This paper proposes a method that applies transfer learning to automatically extract and classify road markings from mobile LiDAR point clouds. The data processing workflow consists of preprocessing, training, extraction and classification, and accuracy assessment. In preprocessing, non-road points are first filtered out and the remaining point cloud is converted into grid-based intensity images. In the training stage, the selected data are manually annotated and split to build the training and testing datasets; the training dataset can draw on existing openly available databases and is further expanded by augmenting the available training data. The trained machine learning model is then used to extract and classify road markings from the LiDAR intensity images, and the test results are evaluated against manually interpreted references: the precision, recall, and F1 score of the extraction are assessed first, followed by the classification error rate, and the classified point clouds are finally vectorized. The results show that the pre-trained U-Net model trained on 5 cm resolution LiDAR intensity images performs best. With F1 scores on par with recent work and error rates below 15%, the proposed method is shown to successfully extract and classify road markings, and its test performance is comparable to recently published results. However, while its extraction completeness is better than that of the compared methods, its classification accuracy is lower, mainly because this study performs extraction and classification simultaneously, whereas the compared methods extract first, filter out noisy point clusters, and only then classify. Future research is therefore advised to separate the extraction and classification steps and to add a filtering mechanism in between to reduce the classification error rate.
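The accuracy assessment described above can be made concrete with a small sketch of pixel-wise metrics. The function names, the boolean and label encodings, and the exact definition of the classification error rate below are assumptions made for illustration; the paper's own evaluation may define these quantities differently.

import numpy as np

def extraction_scores(pred_mask, ref_mask):
    # Pixel-wise precision, recall, and F1 for road-marking extraction.
    # pred_mask / ref_mask: boolean arrays of identical shape, True where
    # a pixel is labelled as road marking (assumed encoding).
    tp = np.logical_and(pred_mask, ref_mask).sum()
    fp = np.logical_and(pred_mask, ~ref_mask).sum()
    fn = np.logical_and(~pred_mask, ref_mask).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def classification_error_rate(pred_labels, ref_labels):
    # One plausible reading of the classification error rate: the share of
    # reference road-marking pixels that were extracted but assigned the
    # wrong class (label 0 is assumed to mean background).
    marked = ref_labels > 0
    wrong = (pred_labels > 0) & (pred_labels != ref_labels) & marked
    return wrong.sum() / marked.sum() if marked.sum() else 0.0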

Parallel Keywords

Mobile LiDAR; Road Markings; Extraction; Classification; Transfer Learning
