
Effective Moving Object Detection Over Variable Bit-Rate Wireless Video Streaming

Advisor: 黃士嘉

Abstract


Moving object detection is widely recognized as one of the most important functions of an automated video surveillance system. However, detecting moving objects in variable bit-rate video is a difficult problem. Variable bit-rate video arises because real-time video transmitted over wireless networks frequently suffers from network congestion or unstable bandwidth, especially in embedded applications; sudden changes in the bit rate of a video stream easily cause false detections of moving objects. This thesis proposes a moving object detection algorithm based on a counter-propagation neural network to achieve accurate and complete detection. The method consists of a various background generation module and a moving object extraction module. First, the various background generation module builds multiple background models for different bit rates, so that the background properties at each bit rate can be fully represented. The moving object extraction module then effectively extracts moving objects from video at different bit rates. The detection results of this method were compared with those of other well-known methods; both qualitative and quantitative analyses show that the proposed method performs best. In terms of accuracy, the proposed algorithm exceeds existing methods by up to 83.34% and 89.71% on the impartial evaluation metrics Similarity and F1, respectively.

Abstract (English)


Motion detection plays an important role in video surveillance systems. Video communications over wireless networks can easily suffer from network congestion or unstable bandwidth, especially in embedded applications. A rate-control scheme produces variable bit-rate video streams to match the available network bandwidth. However, effective detection of moving objects in variable bit-rate video streams is a very difficult problem. This paper proposes an advanced approach based on a counter-propagation artificial neural network to achieve effective moving object detection in variable bit-rate video streams. The proposed method is composed of two important modules: a various background generation module and a moving object extraction module. The various background generation module generates an adaptive background model that can express the properties of variable bit-rate video streams. After the adaptive background model is generated, the moving object extraction module detects moving objects effectively from both low-quality and high-quality video streams. Lastly, a binary motion detection mask is generated as the detection result from the output value of the counter-propagation network. We compare our method with other state-of-the-art methods. To demonstrate the performance of the proposed method with regard to object extraction, we present qualitative and quantitative comparisons on a wide range of natural video sequences over real-world limited-bandwidth networks. The overall results show that the proposed method substantially outperforms other state-of-the-art methods, by up to 83.34% and 89.71% in Similarity and F1 accuracy rates, respectively.
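A counter-propagation network, as named above, pairs a competitive (Kohonen) layer that clusters inputs with an outstar (Grossberg) layer that maps each winning cluster to an output value. The following is only a rough, minimal sketch of that general architecture, not the thesis's actual implementation; all class names, parameters, and the seeding strategy are invented for illustration.

```python
import numpy as np

class CounterPropagationNet:
    """Minimal counter-propagation network sketch: a competitive
    (Kohonen) layer that clusters inputs, followed by an outstar
    (Grossberg) layer that maps each cluster to an output value."""

    def __init__(self, n_hidden, lr_kohonen=0.1, lr_grossberg=0.1):
        self.n_hidden = n_hidden
        self.lr_k = lr_kohonen    # learning rate for prototype updates
        self.lr_g = lr_grossberg  # learning rate for output updates
        self.W = None  # Kohonen prototypes, shape (n_hidden, n_inputs)
        self.V = None  # Grossberg weights,  shape (n_hidden, n_outputs)

    def _winner(self, x):
        # Winner-take-all: the prototype closest to the input wins.
        return int(np.argmin(np.linalg.norm(self.W - x, axis=1)))

    def fit(self, X, Y, epochs=20):
        X = np.asarray(X, dtype=float)
        Y = np.asarray(Y, dtype=float)
        # Seed prototypes from the first samples to avoid dead units.
        self.W = X[: self.n_hidden].copy()
        self.V = np.zeros((self.n_hidden, Y.shape[1]))
        for _ in range(epochs):
            for x, y in zip(X, Y):
                j = self._winner(x)
                self.W[j] += self.lr_k * (x - self.W[j])  # move prototype toward input
                self.V[j] += self.lr_g * (y - self.V[j])  # learn the associated output
        return self

    def predict(self, x):
        # Output is the winning unit's Grossberg weight vector.
        return self.V[self._winner(np.asarray(x, dtype=float))]
```

In a detection setting one could, for example, train on per-pixel features sampled from both low and high bit-rate frames with label 0 for background and 1 for foreground, then threshold the network output at 0.5 to form a binary motion mask.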

