
Neural Network Based High-Precision Moving Object Detection for Dynamic Scenes

Advisor: 黃士嘉

Abstract


Moving object detection is the first function that must be developed in an automated video surveillance system, and is widely recognized as the most important one. However, moving object detection in dynamic scenes remains a difficult problem, because both the background (e.g., swaying trees or fountains) and the moving objects are in motion, which easily causes false detections. This thesis proposes a moving object detection algorithm based on a radial basis function (RBF) neural network to achieve accurate and complete detection. The method consists of two modules: a multi-background generation module and a moving object detection module. First, to adequately represent the characteristics of both static and dynamic backgrounds, the multi-background generation module builds a flexible multi-background model through an unsupervised learning procedure. Next, through a two-stage detection procedure of block alarm and object extraction, the moving object detection module operates only on blocks that may contain moving objects, achieving accurate and complete detection. The detection results of the proposed method were compared with those of other well-known methods; both subjective and objective analyses show that the proposed method performs best. In particular, the accuracy of the proposed algorithm exceeds that of existing methods by up to 82.93% and 87.25% on the impartial evaluation metrics Similarity and F1, respectively.

Abstract (English)


Motion detection, the process of segmenting moving objects in video streams, is the first critical stage of an automatic video surveillance system. However, the accuracy of this crucial stage is usually degraded by dynamic scenes, which are commonly encountered both indoors and outdoors. In this thesis, accurate motion detection is achieved by the proposed method, which is based on a radial basis function neural network. Our method comprises a multi-background generation module and a moving object detection module. In the first module, a flexible multi-background model is generated by an unsupervised learning process to capture the properties of both dynamic and static backgrounds. Next, the moving object detection module achieves complete and accurate detection of moving objects by processing only those blocks that are highly likely to contain moving objects. This is accomplished through two procedures: a block alarm procedure and an object extraction procedure. The detection results of the proposed method were compared with those of other state-of-the-art methods through qualitative visual inspection and quantitative estimation. The overall results show that the proposed method substantially outperforms existing methods, with Similarity and F1 accuracy rates up to 82.93% and 87.25% higher, respectively.
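The two-module pipeline described above — a multi-background model learned per pixel, followed by block-level alarming and pixel-level object extraction inside alarmed blocks only — can be sketched as follows. This is a minimal illustrative sketch, not the thesis's method: the RBF-network learning is stood in for by simple quantile sampling of the pixel history, and the function names, block size, and thresholds are hypothetical.

```python
import numpy as np

def build_multi_background(frames, k=3):
    """Sketch of the multi-background generation module: derive k background
    candidates per pixel from its history. Quantile sampling here is a
    stand-in for the thesis's unsupervised RBF-network learning."""
    history = np.stack(frames, axis=0).astype(float)   # (T, H, W)
    qs = np.linspace(0.1, 0.9, k)
    return np.quantile(history, qs, axis=0)            # (k, H, W)

def detect_moving_objects(frame, backgrounds, block=8,
                          block_thresh=12.0, pixel_thresh=25.0):
    """Sketch of the two-stage detection: block alarm, then object
    extraction only inside alarmed blocks."""
    frame = frame.astype(float)
    # per-pixel distance to the closest background candidate
    dist = np.min(np.abs(backgrounds - frame[None]), axis=0)
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = dist[y:y + block, x:x + block]
            if tile.mean() > block_thresh:             # block alarm stage
                # object extraction stage, run only in alarmed blocks
                mask[y:y + block, x:x + block] = tile > pixel_thresh
    return mask
```

Skipping un-alarmed blocks is what keeps the second stage cheap: pixel-level comparison runs only where the block statistic suggests a moving object may be present.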

