
Detailed Record

Author (Chinese): 范炎方
Author (English): Fan, Yan-Fang
Thesis Title (Chinese): 適用於移動平台之移動物體偵測技術
Thesis Title (English): Moving Object Detection from a Moving Platform
Advisor (Chinese): 彭明輝
Advisor (English): Perng, Ming-Hwei
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Power Mechanical Engineering
Student ID: 9633554
Year of Publication (ROC): 98 (2009)
Graduation Academic Year: 97
Language: Chinese
Number of Pages: 78
Keywords (Chinese): 移動物體偵測、移動平台
Keywords (English): moving object detection; moving platform
Most previous moving-object detection techniques were developed for building surveillance systems. Because such systems use a stationary camera, the static background stays fixed in the image over time, so a background model can easily be built; background subtraction then removes the unchanging background regions, and the residual regions are where moving objects are located.
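The static-camera pipeline described above can be sketched in a few lines. This is a minimal illustration only: the per-pixel temporal-median background model and the fixed difference threshold are generic choices for demonstration, not the specific model used by the surveillance systems the abstract refers to.

```python
import numpy as np

def background_subtraction(frames, current, threshold=25):
    """Detect moving regions seen by a stationary camera.

    frames: list of grayscale frames (2-D uint8 arrays) used to model
    the fixed background; current: the frame to analyse.
    Returns a boolean mask that is True where motion is detected.
    """
    # Background model: per-pixel temporal median of the frame history.
    background = np.median(np.stack(frames), axis=0)
    # Background subtraction: pixels that differ strongly from the
    # model are attributed to moving objects.
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Toy example: a static random scene plus one bright "moving object".
rng = np.random.default_rng(0)
history = [rng.integers(0, 10, (40, 40)).astype(np.uint8) for _ in range(5)]
frame = history[0].copy()
frame[10:15, 10:15] = 200          # the moving object
mask = background_subtraction(history, frame)
print(mask[12, 12], mask[0, 0])    # True False
```

The median model is robust to a moving object passing through the history, which is why it is a common baseline for static-camera subtraction.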
When the algorithms used by such surveillance systems are applied to self-moving platforms, such as navigation aids for the blind or automotive collision avoidance, the camera's ego-motion causes the static background to shift across the image sequence, making static and moving objects difficult to tell apart. Much of the existing literature compensates for the background by warping the frame at the first time instant to the second using a single planar transformation for the entire image; however, scene elements at different depths cannot all be compensated well by one planar transform. Some studies therefore iterate toward a globally optimal compensation, but this strategy incurs a heavy computational load. This study instead uses color segmentation to partition the scene and assigns each color segment its own planar transformation parameters. Segment-wise background compensation can account for translation, rotation, and scaling, clearly outperforming the compromise whole-image translation compensation of prior work, and color segmentation saves substantial computation compared with iterative global optimization. For matching control points between consecutive frames, a search-region concept derived from the inherent projective geometry directly predicts each control point's likely position at the next time instant, greatly shrinking the search area, reducing computation time, and improving the rate of correct matches.
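The segment-wise compensation idea can be illustrated with a small sketch. For simplicity it fits an affine transform (a special case of a planar transformation) to the matched control points of a single color segment by least squares; the function names and the toy ego-motion are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src points to dst.

    src, dst: (N, 2) arrays of matched control points (N >= 3).
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    n = len(src)
    homog = np.hstack([src, np.ones((n, 1))])        # (N, 3) homogeneous
    # Solve homog @ A.T ~= dst for the 2x3 affine matrix A.
    A_t, *_ = np.linalg.lstsq(homog, dst, rcond=None)
    return A_t.T

def compensate_segment(points, A):
    """Warp one segment's pixel coordinates with its own transform."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return homog @ A.T

# Toy example: control points in one color segment, displaced between
# frames by a known rotation + translation (that segment's apparent
# background motion under camera ego-motion).
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
dst = src @ R.T + np.array([3.0, -1.0])              # frame-2 positions
A = fit_affine(src, dst)
print(np.allclose(compensate_segment(src, A), dst))  # True
```

Fitting one such transform per color segment, rather than one for the whole image, is what lets regions at different depths be compensated with different parameters.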
With these improvements over the existing literature, the proposed algorithm has a low computational load and achieves better background compensation than prior methods; its robustness and speed make real-time detection attainable.
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1  Problem Background and Problem Description
1.2  Literature Review
1.2.1  Hardware Types for Moving Object Detection and Their Corresponding Techniques
1.2.2  Moving Object Detection with Color Digital Cameras
1.3  Research Method
1.4  Scope of Application and Thesis Organization
Chapter 2  Moving Object Detection with a Pair of Color Digital Cameras
2.1  Epipolar Geometry and the Fundamental Matrix
2.2  Camera Model
2.3  Disparity Map
2.4  Mean Shift Color Segmentation
2.5  Planar Transformation
2.6  Temporal Difference
Chapter 3  Moving Object Detection on a Moving Platform
3.1  Fast and Robust Moving Object Detection on a Moving Platform
3.1.1  Pre-test Calibration
3.1.2  Left-Right Control Point Correspondence and Depth Computation
3.1.3  Temporal Control Point Correspondence
3.1.4  Color Segmentation and Computation of 2-D Planar Compensation Parameters
3.1.5  Background Compensation and Temporal Subtraction
Chapter 4  Experimental Results and Comparative Analysis
4.1  Temporal Correspondence Results with the Search Region Concept
4.2  Test Results for Scenes with Complex Depth Distributions
Chapter 5  Conclusion
5.1  Contributions of This Study
5.2  Practical Value and Future Directions
References
[1]. M. Peden, R. Scurfield, D. Sleet, D. Mohan, A. A. Hyder, E. Jarawan, and C. Mathers, Eds., World Report on Road Traffic Injury Prevention. Geneva, Switzerland: World Health Organization, 2004.
[2]. T. Gandhi and M. Trivedi, “Vehicle Surround Capture: Survey of Techniques and a Novel Omni Video Based Approach for Dynamic Panoramic Surround Maps,” IEEE Trans. Intelligent Transportation Systems, Vol.8, No.1, pp.108-120, 2006.
[3]. D. M. Gavrila, J. Giebel and S. Munder, “Vision-Based Pedestrian Detection: The PROTECTOR System,” IEEE Intelligent Vehicles Symposium, Parma, Italy, 2004.
[4]. U. Franke and S. Heinrich, “Fast Obstacle Detection for Urban Traffic Situations,” IEEE Trans. Intelligent Transportation Systems, Vol.3, No.3, pp.173-181, 2002.
[5]. Y. Fang, K. Yamada, Y. Ninomiya, B. Horn and I. Masaki, “A shape-independent method for pedestrian detection with far-infrared-images,” IEEE Transactions on Vehicular Technology, Vol.53, No.6, pp.1679-1697, 2004.
[6]. M. Z. Brown, D. Burschka, G. D. Hager, “Advances in Computational Stereo,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.25, No.8, pp.993-1008, 2003.
[7]. D. M. Gavrila and S. Munder, “Multi-cue Pedestrian Detection and Tracking from a Moving Vehicle,” International Journal of Computer Vision, Vol.73, No.1, pp.41–59, 2007.
[8]. T. Gandhi and M. M. Trivedi, “Pedestrian Protection Systems: Issues, Survey, and Challenges,” IEEE Trans. Intelligent Transportation Systems, Vol. 8, No. 3, pp.413-430, 2007.
[9]. T. Gandhi, M.M. Trivedi, “Pedestrian Collision Avoidance Systems: A Survey of Computer Vision Based Recent Studies,” IEEE Intelligent Transportation Systems Conference, pp. 976-981, 2006.
[10]. http://www.ifp.uni-stuttgart.de/forschung/index.en.html
[11]. F. Xu, X. Liu, and K. Fujimura, “Pedestrian detection and tracking with night vision,” IEEE Trans. Intelligent Transportation Systems, Vol.6, No.1, pp.63-71, 2005.
[12]. Y.I. Abdel-Aziz, and H.M. Karara, “Direct Linear Transformation from Comparator Coordinates into Object Space Coordinates in Close-Range Photogrammetry,” Proceedings of the Symposium on Close-Range Photogrammetry, Falls Church, VA: American Society of Photogrammetry, pp.1-18, 1971.
[13]. http://www.7b.org/thermal/scope.html
[14]. http://www.seas.upenn.edu/~limingw/obj_det_accv07/
[15]. G. N. DeSouza and A. C. Kak, “Vision for Mobile Robot Navigation: A Survey,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 24, No. 2, pp.237-267, 2002.
[16]. D. Scharstein, R. Szeliski, “A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms,” Int. J. Computer Vision, Vol. 47, pp.7–42, 2002.
[17]. Z. Member, G. Bebis and R. Miller, “On-Road Vehicle Detection: A Review,” IEEE Trans. Pattern Analysis And Machine Intelligence, Vol. 28, No. 5, pp.694-711, 2006.
[18]. J. Horn, A. Bachmann, and T. Dang, “Stereo Vision Based Ego-Motion Estimation with Sensor Supported Subset Validation,” IEEE Intelligent Vehicles Symposium, Istanbul, Turkey, pp. 13-15, 2007.
[19]. H. Hatze, “High-Precision Three-Dimensional Photogrammetric Calibration and Object Space Reconstruction Using a Modified DLT-Approach,” J. Biomech, Vol.21, pp.533-538, 1988.
[20]. J. Weng, P. Cohen, and M. Herniou, “Camera Calibration with Distortion Models and Accuracy Evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.14, No.10, pp.965-980, 1992.
[21]. Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.22, No.11, pp.1330-1334, 2000.
[22]. M. Lili, Y.Q. Chen, K.L. Morre, “Flexible Camera Calibration Using a New Analytical Radial Undistortion Formula with Application to Mobile Robot Localization,” Proceedings of the IEEE International Symposium on Intelligence Control, pp.799-804, 2003.
[23]. F. Y. Wang, “A Simple and Analytical Procedure for Calibrating extrinsic Camera Parameters,” IEEE Trans. Robotics and Automation, Vol.20, No.1, pp.121-124, 2004.
[24]. F. Y. Wang, “An Efficient Coordinate Frame Calibration Method for 3-D Measurement by Multiple Camera Systems,” IEEE Trans. System, Man, and Cybernetics: Part C, Vol.35, No.5, 2005.
[25]. L. Lucchese, “Geometric calibration of digital cameras through multi-view rectification,” Image and Vision Computing, Vol.23, No.5, pp.517-539, 2005.
[26]. T. B. Moeslund, A. Hilton and V. Kruger, “A survey of advances in vision-based human motion capture and analysis,” Computer Vision and Image Understanding, Vol. 104, pp.90-126, 2006.
[27]. M. Irani, B. Rousso, S. Peleg, “Computing Occluding and Transparent Motions,” Int’l J. Computer Vision, Vol.12, pp.5-16, 1994.
[28]. M. Irani and P. Anandan, “A unified approach to moving object detection in 2D and 3D scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.20, No.6, pp.577-589, 1998.
[29]. Y. Zhang, S. J. Kiselewich, W.A. Bauson, and R. Hammoud, “Robust Moving Object Detection at Distance in the Visible Spectrum and Beyond Using A Moving Camera,” Conference on Computer Vision and Pattern Recognition Workshop, 2006.
[30]. M. Irani, B. Rousso, S. Peleg, “Recovery of Ego-Motion Using Region Alignment,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.19, No.3, 1997.
[31]. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” DARPA Image Understanding Workshop, pp. 121-130, 1981.
[32]. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial intelligence, Vol.17, pp. 185-203, 1981.
[33]. J. L. Barron and N. A. Thacker, “Tutorial: Computing 2D and 3D Optical Flow,” Tina Memo, No.2004-012, 2005.
http://www.tina-vision.net/docs/memos/2004-012.pdf
[34]. W. Hu, T. Tan, L. Wang and S. Maybank, “A survey on visual surveillance of object motion and behaviors,” IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), Vol.34, No.3, pp.334-352, 2004.
[35]. M. Okutomi and T. Kanade, “A multiple baseline stereo,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol.15, No.4, pp.353–363, 1993.
[36]. Y. Ohta and T. Kanade, “Stereo by intra- and inter-scanline search using dynamic programming,” IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-7(2), pp.139-154, 1985.
[37]. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.23, No.11, pp.1222–1239, 2001.
[38]. C. C. Erway, B. Ransford, “Variable Window Methods for Stereo Disparity Determination,” Ithaca, New York, May 2000.
[39]. S. Birchfield and C. Tomasi, “Multiway Cut for Stereo and Motion with Slanted Surfaces,” Proceedings of the Seventh IEEE International Conference on Computer Vision, Sep. 1999.
[40]. T. Kanade and M. Okutomi, “A Stereo Matching Algorithm with an Adaptive Window: Theory and Experiment,” IEEE Trans. Pattern Anal. Machine Intell., Vol.16, No.9, pp.920-931, 1994.
[41]. H. Li, B. S. Manjunath, S. K. Mitra, “A contour-based approach to multisensor image registration,” IEEE Transactions on Image Processing, Vol.4, No.3, pp.320-334, 1995.
[42]. J. W. Hsieh, H.-Y. M. Liao, K. C. Fan, M. T. Ko, Y. P. Huang, “Image registration using a new edge-based approach,” Computer Vision and Image Understanding, Vol.67, No.2, pp.112-130, 1997.
[43]. W. H. Wang, Y. C. Chen, “Image registration by control points pairing using the invariant properties of line segments,” Pattern Recognition Letters, Vol.18, No.3, pp. 269-281, 1997.
[44]. C. Shekhar, “Alignment using distributions of local geometric properties,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.21, No.10, pp.1031-1043, 1999.
[45]. V. Govindu, C. Shekhar, R. Chellappa, “Using geometric properties for correspondence-less image alignment,” Proceedings of the International Conference on Pattern Recognition ICPR’98, Brisbane, Australia, pp.37-41, 1998.
[46]. D. Xu, T. Kasparis, “A hierarchical approach to image registration using feature consensus and Hausdorff distance,” Proceedings of the SPIE - The International Society for Optical Engineering, Vol.5428, No.1, pp.561-568, 2004.
[47]. Y.-Y. Chuang (莊永裕), “Lecture 11: 3D Photography,” Digital Visual Effects, Spring 2008.
[48]. R. Hartley, A. Zisserman, “Multiple View Geometry in Computer Vision,” Cambridge University Press, 2003.
[49]. http://en.wikipedia.org/wiki/Epipolar
[50]. A. Tsalatsanis, K. Valavanis, A. Yalcin, “Vision Based Target Tracking and Collision Avoidance for Mobile Robots,” J Intell Robot Syst, Vol.48, pp.285–304, 2007.
[51]. D. Comaniciu, and P. Meer, “Mean Shift Analysis and Applications,” Proc. Seventh Int’l Conf. Computer Vision, pp.1197-1203, 1999.
[52]. Y. Ukrainitz, B. Sarel, “Mean Shift Theory and Application,” 2008, http://web.missouri.edu/~hantx/ECE8001/notes/lecture8_mean_shift.pdf
[53]. R. Szeliski, “Image Alignment and Stitching: A Tutorial,” Handbook of Mathematical Models in Computer Vision, pp.273-292, 2005.
[54]. S. Y. Elhabian, K. M. El-Sayed and S. H. Ahmed, “Moving object detection in spatial domain using background removal techniques - State-of-art,” Recent Patents on Computer Science, Vol. 1, pp.32-54, 2008.
[55]. N. Bocheva, “Detection of motion discontinuities between complex motions,” Vision Research, Vol.46, pp.129–140, 2006.
[56]. J. Schmüdderich, V. Willert, J. Eggert, S. Rebhan, C. Goerick and G. Sagerer, “Estimating object proper motion using optical flow, kinematics, and depth information,” IEEE Trans. Systems, Man, and Cybernetics - Part B: Cybernetics, Vol.38, No.4, pp.1139-1151, 2008.
[57]. E. Durucan, T. Ebrahimi, “Moving Object Detection Between Multiple and Color Images,” IEEE Conference on Advanced Video and Signal Based Surveillance, 2003.
[58]. 石明于, 黃鐘賢, 趙盈盈, 張耀仁, 富博超, “Moving Object Detection on Moving Platforms” (行進間移動物體偵測技術), ICL Technical Journal, Vol.120, pp.32-40, 2007.
[59]. T. Suk, J. Flusser, "The features for recognition of projectively deformed point sets," Proceedings International Conference on Image Processing (Cat. No.95CB35819), Vol.3, pp. 348-351, 1995.
[60]. A. Discant, A. Rogozan, C. Rusu and A. Bensrhair, “Sensors for Obstacle Detection A Survey,” 30th International Spring Seminar on Electronics Technology, Cluj-Napoca ROMANIA, pp. 100-105, 2007.
[61]. A. Kale, A. Sundaresan, A. N. Rajagopalan, N. Cuntoor, A. R. Chowdhury, V. Krüger and R. Chellappa, “Identification of humans using gait,” IEEE Transactions on Image Processing, Vol.13, No.9, pp.1163-1173, 2004.
[62]. C. Harris, M. Stephens, “A combined corner and edge detector,” Alvey Vision Conference, pp. 147-152, 1988.
[63]. D. Comaniciu, and P. Meer, “Mean Shift: A Robust Approach Toward Feature space Analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.24, pp.603-619, 2002.
[64]. D Scharstein, R Szeliski, “A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms,” International Journal of Computer Vision, Vol. 47, pp.7-42, 2002.
[65]. B. Georgescu, I. Shimshoni and P. Meer, “Mean Shift based clustering in high dimensions: a texture classification example,” Proc. of International Conference on Computer Vision, pp.456-463, 2003.
[66]. R. Lenz and R. Tsai, “Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.10, No.5, pp.713-720, 1988.
[67]. R. M. Haralick, "Using perspective transformations in scene analysis," Computer Graphics and Image Processing, Vol.13, No.3, pp.191-221, 1980.
[68]. S. Baker And I. Matthews, “Lucas-Kanade 20 Years On: A Unifying Framework,” International Journal of Computer Vision, Vol.56, No.3, pp.221–255, 2004.
[69]. T. Y. Chen, A. C. Bovik, L. K. Cormack, “Stereoscopic Ranging by Matching Image Modulations,” IEEE Transactions on Image Processing, Vol.8, No.6, pp.785-797, 1999.
[70]. X. Armangué, H. Araújo, J. Salvi, “A review on egomotion by means of differential epipolar geometry applied to the movement of a mobile robot,” Pattern Recognition, Vol.36, pp.2927-2944, 2003.