
Student: 洪慶豪 (Hung, Ching-Hao)
Title: 基於視覺分析之路徑預測與危險分析系統
Potential Trajectory Prediction and Risk Assessment for Complex Scenarios
Advisors: 陳灯能 (Chen, Deng-Neng); 許志仲 (Hsu, Chih-Chung)
Degree: Master
Department: Department of Management Information Systems, College of Management
Graduation Academic Year: 109 (2020-2021)
Language: Chinese
Pages: 68
Keywords (Chinese): 自駕車, 自動駕駛, 路徑預測, 危險分析
Keywords (English): self-driving, autonomous driving, trajectory prediction, risk assessment
DOI: http://doi.org/10.6346/NPUST202100336
    The development of self-driving cars involves many core technologies, among which safety draws the most attention: safe driving for the vehicle itself, and defensive driving with respect to others. Different countries differ in terrain, scenery, and traffic scenes, so there is no single universal risk-prediction model; the model must be adjusted for each region's scenes. The underlying model, however, can be shared, because the differences lie in the scenery rather than in driving behavior, and the system can be adapted to different regions by training on each region's datasets. The goal of this thesis is therefore to build a general model whose core idea is to capture the behavioral features common to dangerous driving. With deep learning, the computer can automatically learn which features constitute the common risk factors of dangerous driving.

    This thesis uses Taiwan as the validation field and develops a system tailored to Taiwan's terrain and traffic ecology, so that the computer can learn the common dangerous-driving factors latent in the collected data. To this end, we first apply object detection to extract the relevant moving objects, such as pedestrians and vehicles, from the video. We then introduce Social-GAN, combined with the interactive movement cues extracted in this work (Object Contextual Feature, OCF), to strengthen the prediction of surrounding vehicles' future trajectories. Finally, we introduce a module that anticipates future traffic accidents and use the feature module developed in this thesis to raise the accuracy of the final accident prediction, achieving the goal of defensive driving. Experimental results show that the proposed method performs stably and favorably, yielding highly accurate predictions for the objects that will be involved in accidents.
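    As a rough illustration of the three-stage pipeline described above (object detection, trajectory prediction, risk assessment), the sketch below chains the stages with a constant-velocity stand-in for the Social-GAN predictor and a simple distance threshold in place of the learned accident-anticipation module. All names (`Track`, `predict_trajectory`, `assess_risk`) are hypothetical, not the thesis implementation.

```python
# Hypothetical sketch of the pipeline: detected objects are tracked over time,
# each track's future path is predicted, and pairs whose predicted paths come
# too close are flagged as potential accidents. The real system uses YOLO for
# detection, Social-GAN with the OCF feature for prediction, and a learned
# accident-anticipation module for risk; this is only a structural outline.

from dataclasses import dataclass

@dataclass
class Track:
    obj_id: int
    history: list  # observed (x, y) positions, oldest first

def predict_trajectory(track, horizon=8):
    """Constant-velocity stand-in for the Social-GAN predictor."""
    (x0, y0), (x1, y1) = track.history[-2], track.history[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * t, y1 + vy * t) for t in range(1, horizon + 1)]

def assess_risk(tracks, horizon=8, collision_dist=2.0):
    """Flag object pairs whose predicted paths pass within collision_dist."""
    futures = {t.obj_id: predict_trajectory(t, horizon) for t in tracks}
    risky = set()
    ids = sorted(futures)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            for (ax, ay), (bx, by) in zip(futures[a], futures[b]):
                if ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < collision_dist:
                    risky.add((a, b))
                    break
    return risky

# Two vehicles on converging courses are flagged as a potential collision.
tracks = [
    Track(1, [(0.0, 0.0), (1.0, 1.0)]),   # moving up-right
    Track(2, [(8.0, 8.0), (7.0, 7.0)]),   # moving down-left
]
print(assess_risk(tracks))  # {(1, 2)}
```

    In the thesis, the middle stage additionally conditions each prediction on the movements of surrounding objects (the OCF cue), which is what a social interaction model like Social-GAN provides over this naive per-object extrapolation.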

    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1: Introduction
    1.1 Research Background and Motivation
    1.2 Research Objectives and Methods
    1.3 Research Scope and Limitations
    1.4 Thesis Organization
    Chapter 2: Literature Review and Technical Background
    2.1 Development of Self-Driving Cars
    2.1.1 Levels of Driving Automation
    2.1.2 Regulations and Safety
    2.1.3 Self-Driving Car Development
    2.2 Machine Learning
    2.2.1 Neural Networks (NN)
    2.2.2 Long Short-Term Memory (LSTM)
    2.2.3 Attention Models
    2.3 Object Detection (You Only Look Once, YOLO)
    2.4 Generative Adversarial Networks (GAN)
    2.4.1 Social GAN
    2.5 Traffic Accident Detection
    Chapter 3: Research Framework and Methods
    3.1 Model Architecture
    3.1.1 System Flowchart
    3.1.2 Trajectory Prediction Model
    3.1.3 Risk Assessment Model
    3.2 Datasets
    Chapter 4: Experimental Results
    4.1 Experimental Environment
    4.1.1 Environment for the Trajectory Prediction Model
    4.1.2 Environment for the Risk Assessment Model
    4.2 Evaluation Metrics
    4.2.1 Metrics for the Trajectory Prediction Model
    4.2.2 Metrics for the Risk Assessment Model
    4.3 Experimental Parameters and Results
    4.3.1 Parameter Settings and Results for the Trajectory Prediction Model
    4.3.2 Parameter Settings and Results for the Risk Assessment Model
    4.4 Performance Comparison
    4.5 Ablation Study
    Chapter 5: Conclusions
    5.1 Main Contributions
    5.2 Conclusions
    References

    [1] J. Lutin, A. L. Kornhause, and E. Lerner-Lam, "The revolutionary development of self-driving vehicles and implications for the transportation engineering profession," in Institute of Transportation Engineers (ITE) Journal, vol. 83, 2013, pp. 28-32.
    [2] C. Urmson, et al., "Autonomous driving in urban environments: boss and the urban challenge." Journal of Field Robotics (JFR), vol. 25, no. 8, 2008, pp. 425–466.
    [3] G. Nirschl, "Human-centered development of advanced driver assistance systems," in Human–Computer Interaction (HCI), vol. 4558, 2007, pp. 1088–1097.
    [4] W. Fenghui, et al., "One-dimensional cellular automaton traffic flow model based on defensive driving strategy," in International Journal of Crashworthiness, 2020.
    [5] P. Lai, C. Dow and Y. Chang, "Rapid-response framework for defensive driving based on internet of vehicles using message-oriented middleware," in IEEE Access, vol. 6, 2018, pp. 18548-18560.
    [6] J. Redmon, and A. Farhadi, "Yolov3: an incremental improvement," arXiv preprint arXiv: 1804.02767, 2018.
    [7] A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese and A. Alahi, "Social gan: socially acceptable trajectories with generative adversarial networks," in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2018, pp. 2255–2264.
    [8] T. Zhao, and X. Wu, "Pyramid feature attention network for saliency detection," in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2019, pp. 3080-3089.
    [9] F.-H. Chan, Y. T. Chen, Y. Xiang, and M. Sun, "Anticipating accidents in dashcam videos," in Asian Conference on Computer Vision (ACCV), vol. 10114, 2016, pp. 136–153.
    [10] SAE International (2018, June 15). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles (3rd ed.) [Online]. Available: https://www.sae.org/standards/content/j3016_201806/.
    [11] Q. Zhang, X. J. Yang, and L. P. Robert, "Expectations and trust in automated vehicles," in Human–Computer Interaction (HCI), 2020, pp. 1–9.
    [12] Y. Lecun, E. Cosatto, J. Ben, U. Muller, and B. Flepp, "Dave: autonomous off-road vehicle control using end-to-end learning," Courant Institute/CBLL, http://www.cs.nyu.edu/~yann/research/dave/index.html, DARPA-IPTO Final Report, 2004.
    [13] M. Bojarski et al., "End to end learning for self-driving cars," arXiv preprint arXiv:1604.07316, 2016.
    [14] R. Threlfall, "2020 autonomous vehicles readiness index," Klynveld Peat Marwick Goerdeler, KPMG, 2020.
    [15] P. Cunningham, M. Cord, and S. J. Delany, "Supervised learning," in machine learning techniques for multimedia, M. Cord and P. Cunningham, Eds. Springer, 2008, pp. 21–49.
    [16] A. Radford, L. Metz, and S. Chintala, "Unsupervised Representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
    [17] O. Chapelle et al., "Semi-supervised learning," Cambridge, MA, USA: MIT Press, 1st edition, 2010.
    [18] R. S. Sutton and A. G. Barto, "Reinforcement learning: an introduction," Cambridge, MA, USA: MIT Press, 2018.
    [19] S. Hochreiter and J. Schmidhuber, "Long short-term memory," in Neural Computation, vol. 9, no. 8, 1997, pp. 1735-1780.
    [20] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: unified, real-time object detection," in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2016, pp. 779-788.
    [21] I. J. Goodfellow et al., "Generative adversarial networks," in Advances in Neural Information Processing Systems (NIPS), 2014, pp. 2672-2680.
    [22] J.-Y. Zhu, T. Park, P. Isola and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2223-2232.
    [23] C. Ledig et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2017, pp. 105–114.
    [24] C. Olah, "Understanding LSTM networks," 2015. [Online]. Available: https://colah.github.io/posts/2015-08-Understanding-LSTMs/.
    [25] T. Karras, T. Aila, S. Laine, and J. Lehtinen, "Progressive growing of gans for improved quality, stability, and variation," arXiv preprint arXiv:1710.10196, 2017.
    [26] J. Liang, L. Jiang, J. C. Niebles, A. G. Hauptmann and L. Fei-Fei, "Peeking into the future: predicting future person activities and locations in videos," in Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2019, pp. 5718-5727.
    [27] R. Langari, "Autonomous vehicles," in Proceedings of the American Control Conference (ACC), 2017, pp. 4018–4022.
