In most existing studies, the sensor networks of intelligent spaces use vision sensors whose mutual positions are known in advance, so as to support the navigation and control of mobile robots in the intelligent space. This thesis proposes a global environment map building system, different from previous work, in which an intelligent space and a mobile robot cooperate. The intelligent space uses IP cameras as sensing units; each camera monitors a different area, and their mutual positions are unknown. The mobile robot carries two cross-line laser structured-light projectors, and its onboard monocular camera captures the laser beams projected onto obstacles and reconstructs the captured laser lines in real three-dimensional space to determine obstacle positions. In addition, the sensing camera covering the robot's current area transmits the grid-point features mounted on the robot back to the robot, so that the transformation between camera and robot can be calibrated on-line. By combining laser exploration with the IP camera calibration information, the mobile robot builds a local environment map referenced to that IP camera. The path planning rule designed in this thesis then searches the overlapping fields of view of the space cameras to determine the relations among the IP cameras; the local maps can therefore be merged into a global environment map containing obstacles, which is used to navigate the robot. Finally, three IP cameras are installed in an indoor space to construct the intelligent environment, and a custom-made mobile robot validates the proposed global map building system by navigating through the intelligent space.
In most existing studies, vision sensors are employed in the sensor networks of intelligent spaces, and the positions of those sensors are assumed to be known in order to facilitate the navigation and control of mobile robots in the intelligent space. This research proposes a system, different from conventional global map building systems, that builds a global environment map through cooperation between a mobile robot and an intelligent space. The proposed intelligent space employs stationary IP cameras with unknown positions, each monitoring a different area independently. A mobile robot equipped with two laser cross-line projectors casts double cross laser lines onto obstacles, which are observed by an onboard camera. Based on the geometric relation between the camera and the laser projectors, the laser pattern can be reconstructed in Cartesian space. Meanwhile, the IP camera detects features on the mobile robot to determine the coordinate transformation between the mobile robot and the IP camera on-line. The global map with obstacles is then built from the reconstructed laser lines and this coordinate transformation. The coordinate transformation between each pair of stationary IP cameras can also be determined from their individual transformations with respect to the mobile robot. The global map of the intelligent space can therefore be expanded as newly located IP cameras are added. Furthermore, this paper proposes a navigation algorithm that avoids obstacles while exploring the intelligent space. The proposed approach improves system efficiency and avoids the calibration problems of existing intelligent spaces. Finally, the proposed system has been successfully validated with a custom-made mobile robot in a laboratory environment equipped with three stationary IP cameras.
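The key chaining step described above, recovering the transformation between two IP cameras from their separate observations of the same robot, can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes planar (SE(2)) rigid transforms and hypothetical example poses, and that both cameras observe the robot at the same instant.

```python
import math

def se2(x, y, theta):
    """Homogeneous 3x3 transform for a planar pose (x, y, heading theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x],
            [s,  c, y],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(T):
    """Inverse of a rigid planar transform: transpose the rotation,
    rotate and negate the translation."""
    c, s, x, y = T[0][0], T[1][0], T[0][2], T[1][2]
    return [[ c, s, -(c * x + s * y)],
            [-s, c,  (s * x - c * y)],
            [0.0, 0.0, 1.0]]

# Hypothetical robot poses as seen by two cameras at the same instant:
T_c1_r = se2(1.0, 2.0, math.pi / 4)   # robot pose in camera 1 frame
T_c2_r = se2(-0.5, 1.0, 0.2)          # robot pose in camera 2 frame

# Camera-to-camera transform obtained via the shared robot observation:
# T_c1_c2 = T_c1_r * inv(T_c2_r), so that T_c1_c2 * T_c2_r == T_c1_r.
T_c1_c2 = matmul(T_c1_r, invert(T_c2_r))
```

Once `T_c1_c2` is known, a local map expressed in camera 2's frame can be re-expressed in camera 1's frame and merged into the growing global map; repeating the step for each newly encountered camera chains all local maps together.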