
利用雙鏡面環場影像攝影和超音波感測技術作戶外自動車學習與導航之研究

A Study on Learning and Guidance for Outdoor Autonomous Vehicle Navigation by Two-mirror Omni-directional Imaging and Ultrasonic Sensing Techniques

Advisor: 蔡文祥

Abstract


With the development of computer vision technologies, stereo cameras have become increasingly popular. This study uses a new type of stereo camera and new guidance techniques to build a robot guide dog that leads its user along sidewalk environments.

In this thesis, we propose a design method and formula for a new stereo camera, which lets a user design such a camera easily. We then propose a camera calibration method based on a space-mapping (pano-mapping) technique to calibrate this stereo camera. Based on the rotational invariance property about the common axis, we propose a method for computing 3D data with this new stereo camera; unlike other methods, the system can compute corresponding image points and 3D data without first transforming the omni-image into a panoramic image. In addition, the mechanical errors that the vehicle accumulates while driving affect the navigation computations, so we also propose an error correction model to solve this problem. We further develop a dynamic camera exposure adjustment method and a dynamic threshold adjustment method to adapt to the non-uniform lighting of the environment.

In this system, the curbstones of the sidewalk are used as the feature points for navigation, and two methods for extracting these feature points are proposed. In the learning mode, the system computes the 3D data of the feature points, determines the direction and distance for driving, and drives accordingly, while automatically recording and analyzing path nodes to build an environment map. We also propose a human-machine interaction technique that allows the user to control the vehicle by hand gestures at any time; in that case the system turns off the feature point computation procedure and performs a blind-walking procedure.

In the navigation mode, we also propose a method for analyzing sequences of ultrasonic signals so that the vehicle can adjust its speed to match the user's and lead the user through the environment. We further propose an improved obstacle avoidance method and a method for computing the vehicle's coordinates, allowing the vehicle to judge the height of an obstacle and avoid it. Finally, experimental results are presented to demonstrate the completeness and feasibility of the system.
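The abstract only names the rotational-invariance-based 3D computation; as a rough illustration of the underlying triangulation, the following Python sketch assumes a calibrated pano-mapping table (radial image distance to elevation angle) for each mirror view and a known vertical baseline between the two mirror centers. All function names, table values, and conventions are illustrative assumptions, not the thesis's actual implementation.

```python
import math

# Hypothetical pano-mapping tables: (radial pixel distance, elevation angle in radians)
# pairs for the upper- and lower-mirror regions of the omni-image.  In the thesis these
# would come from the space-mapping calibration; the values here are placeholders.
TABLE_UPPER = [(50.0, -0.20), (150.0, -0.60), (250.0, -1.00)]
TABLE_LOWER = [(260.0, -0.15), (360.0, -0.55), (460.0, -0.95)]

def elevation_from_radius(r_pixels, table):
    """Linearly interpolate an elevation angle from a (radius, elevation) table."""
    for (r0, a0), (r1, a1) in zip(table, table[1:]):
        if r0 <= r_pixels <= r1:
            t = (r_pixels - r0) / (r1 - r0)
            return a0 + t * (a1 - a0)
    raise ValueError("radius outside calibrated range")

def triangulate(point_upper, point_lower, center, baseline):
    """Estimate the 3D position of a scene point seen in both mirror views.

    point_upper, point_lower: (u, v) image coordinates of the same scene point in the
    upper- and lower-mirror regions of one omni-image; center: (u, v) of the common
    image center on the mirrors' shared axis; baseline: vertical distance between the
    two mirror centers (output units follow this value).
    """
    # Rotational invariance about the common axis: both projections of a scene point
    # lie on the same radial line, so the azimuth can be read from either image point.
    du, dv = point_upper[0] - center[0], point_upper[1] - center[1]
    azimuth = math.atan2(dv, du)

    # Elevation angles, measured from the horizontal plane through each mirror center.
    r_up = math.hypot(du, dv)
    r_lo = math.hypot(point_lower[0] - center[0], point_lower[1] - center[1])
    alpha_up = elevation_from_radius(r_up, TABLE_UPPER)
    alpha_lo = elevation_from_radius(r_lo, TABLE_LOWER)

    # The point's height z satisfies
    #   z = z_lower + D*tan(alpha_lo) = z_upper + D*tan(alpha_up), with z_upper = z_lower + baseline,
    # so the horizontal range is D = baseline / (tan(alpha_lo) - tan(alpha_up)).
    D = baseline / (math.tan(alpha_lo) - math.tan(alpha_up))
    x, y = D * math.cos(azimuth), D * math.sin(azimuth)
    z = D * math.tan(alpha_lo)  # height relative to the lower mirror center
    return x, y, z
```

Because the azimuth is read directly from the radial direction in the omni-image, no panoramic unwarping is needed, which is the point the abstract makes about the rotational invariance property.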

Abstract (English)


With the progress of computer vision technologies, 3D stereo cameras have become more popular than in the past. In this study, a new imaging device and new guidance techniques are proposed to construct an autonomous vehicle for use as a robot guide dog that navigates on sidewalks to guide blind people.

A general formula is proposed for designing a new stereo camera consisting of two mirrors and a single conventional projective camera, with which other stereo cameras of this type can be designed easily. Then, a calibration technique for this type of camera, based on the so-called pano-mapping method, is proposed. When an autonomous vehicle navigates in the environment, incrementally accumulating mechanical errors are a major problem; a correction model based on curve fitting is proposed to correct such errors. Also, a 3D data acquisition technique using the proposed two-mirror omni-camera, based on the rotational invariance property of the omni-image, is proposed; the 3D data can be obtained directly without transforming the acquired omni-images into panoramic images.

The autonomous vehicle is designed to follow the curb line of the sidewalk using a line following technique. In the path learning procedure, two methods are proposed to extract the curbstone feature points. If no curbstone features exist or the features are hard to extract, a new human interaction technique using hand pose position detection and encoding is proposed so that the user can issue guidance commands to the vehicle. To adapt the adopted image processing operations to the varying light intensity conditions of the outdoor environment, two techniques, called dynamic exposure adjustment and dynamic threshold adjustment, are proposed. To create a path map, a path planning technique is proposed that reduces the number of resulting path nodes in order to save time in path corrections during navigation sessions.

In the navigation procedure after learning, unexpected obstacles may block the navigation path; a technique based on the concept of a virtual node is proposed to plan a new path around the obstacle. Finally, to allow the vehicle to guide a blind person to walk smoothly on a sidewalk, a sonar signal processing scheme is proposed to synchronize the speed of the vehicle with that of the person, based on computing the location of the vehicle with respect to the person from the sonar signals. A series of experiments was conducted on a sidewalk on the campus of National Chiao Tung University, and the results show the flexibility and feasibility of the proposed methods for the robot guide dog application in outdoor environments.
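The abstract does not give the details of the sonar-based speed synchronization; the sketch below illustrates one simple way such a scheme could work, assuming the vehicle keeps the user at a fixed following distance measured by a rear-facing sonar and adjusts its speed with a proportional rule. All constants, function names, and interfaces are assumptions for illustration only, not the thesis's method.

```python
# Illustrative only: keep the user at a target following distance behind the vehicle
# by adjusting the vehicle speed in proportion to the distance error measured by a
# rear-facing sonar.  All constants below are assumed values.

TARGET_DISTANCE = 0.8   # desired vehicle-to-user gap in meters (assumed)
GAIN = 0.5              # proportional gain (assumed)
MAX_SPEED = 0.6         # vehicle speed limit in m/s (assumed)

def median(samples):
    """Median of recent sonar readings, used to suppress spurious echoes."""
    ordered = sorted(samples)
    return ordered[len(ordered) // 2]

def next_speed(recent_sonar_readings, current_speed):
    """Compute the next speed command from a window of rear-sonar distances (meters)."""
    distance_to_user = median(recent_sonar_readings)
    error = distance_to_user - TARGET_DISTANCE
    # If the user falls behind (gap grows), slow down so the user can catch up;
    # if the user closes in, speed up, always staying within [0, MAX_SPEED].
    return max(0.0, min(MAX_SPEED, current_speed - GAIN * error))

# Example: readings of 0.9, 1.0, 1.1 m give a median gap of 1.0 m, so a vehicle
# moving at 0.4 m/s would be slowed to 0.3 m/s to let the user catch up.
```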


Cited By


蕭淵元 (2014). Control of an intelligent autonomous vehicle with road-image guidance [Master's thesis, National Formosa University]. Airiti Library. https://www.airitilibrary.com/Article/Detail?DocID=U0028-2307201420243600
廖國龍 (2017). Development of a visual-servoing lawn-mowing robot [Master's thesis, National Formosa University]. Airiti Library. https://www.airitilibrary.com/Article/Detail?DocID=U0028-2707201715120400
