Augmented Reality-based Navigation and Information Retrieval for Book Search in Bookstores and Libraries Using Omni-vision and Feature-matching Techniques

Advisor: Wen-Hsiang Tsai (蔡文祥)

Abstract


This study proposes an indoor navigation system for bookstores and libraries that combines computer vision and augmented reality (AR) techniques. Fisheye cameras mounted on the ceiling serve as the basic hardware infrastructure, and a client-server architecture together with mobile devices provides users with a more intuitive AR interface.

First, the server side analyzes the fisheye-camera images to recognize and detect the movements of multiple users, and sends the analysis results back to each user's mobile device. Meanwhile, the client side transmits the images captured on the mobile device to the server for book recognition, and the server returns the surrounding environment and book information, together with a guidance path, to the client. The client then displays this information in an AR fashion to guide the user to the destination.

For book-cover and book recognition, two methods are adopted to recognize book-cover and book-spine images, respectively. The server first receives the camera image from the client and performs an image segmentation step, and then carries out image matching with feature-based algorithms: the ORB (Oriented FAST and Rotated BRIEF) matching algorithm is used for book covers, and the speeded-up robust feature (SURF) matching algorithm is used for book spines. Both methods perform recognition against a book database constructed in advance on the server.

To speed up feature matching over a large amount of data, a two-stage method combining multi-threading and multi-probe locality-sensitive hashing (multi-probe LSH) is adopted. In the first stage, all the ORB feature descriptors are distributed among the threads, and multiple hash tables are built on each thread with the multi-probe LSH algorithm; in the second stage, the features extracted from the camera image are matched against all the hash tables to obtain the final result.

For AR navigation and information retrieval, the client transmits the mobile-device images to the server, which matches them against the cover and spine images in the database with the ORB and SURF algorithms, respectively, and returns the results to the client for display in an AR fashion. A path-planning technique that uses an environment map to generate a collision-free path is also employed, so that the user can travel from the current location to the location of the searched book. Finally, the proposed system overlays the guidance path and book information on the corresponding real objects in the mobile-device images to provide an AR navigation view for the user.

The experimental results of the above methods are good, showing that the proposed system and methods are indeed feasible.

Parallel Abstract (English)


When people enter complicated indoor environments such as supermarkets, malls, and bookstores, they might get lost or have no idea how to reach desired locations or merchandise. Generally, they will ask the store staff to guide them to the destination. In this study, an indoor navigation system based on augmented reality (AR) and computer vision techniques, using mobile devices, is proposed for applications in bookstores and libraries.

First, an indoor infrastructure is set up by attaching fisheye cameras to the ceiling of the navigation environment. The locations and orientations of multiple users are detected from the images acquired with the fisheye cameras by a server-side system, and the analysis results are sent to a client-side system on each user's mobile device. Meanwhile, the server-side system also analyzes the images acquired by each mobile device to recognize book items, and sends the surrounding environment information, book information, and the navigation path to the client-side system. The client-side system then displays the information in an AR fashion, providing clear guidance for each user to navigate to a destination.

For book spine and cover recognition, two methods are adopted to recognize book spine and cover images, respectively. First, the server-side system receives the image captured by the client-device camera. Second, an image segmentation process is performed. Finally, matching is conducted against a pre-constructed book spine/cover image database, using the SURF and ORB algorithms for book spines and covers, respectively.

To speed up feature matching, a two-stage improvement method is proposed, which combines the multi-probe locality-sensitive hashing (LSH) method and multi-thread processing. In the first stage, all ORB descriptors are distributed to a pre-selected number of threads, and multiple hash tables for use by the multi-probe LSH method are constructed on each thread. In the second stage, the input features are matched against all the tables to obtain the best result.

For AR-based navigation and book information retrieval, the client-side system sends the images captured by the client-device camera to the server-side system. The server-side system then analyzes them with the SURF and ORB algorithms and matches the resulting features against the pre-constructed book spine/cover image database. The result, with the corresponding information, is transmitted to the client-side system for display in an AR fashion. A path-planning technique that uses an environment map to generate a collision-free path from the user's spot to a selected book item is also employed. Finally, the navigation and book information are overlaid onto the images shown on the mobile-device screen for the user to inspect.

Good experimental results are included to show the feasibility of the proposed system and methods, and precision measures and statistics are included to show the system's effectiveness in handling real conditions in bookstores and libraries.
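
As described in the abstracts above, book-cover recognition matches ORB features extracted from the client's camera image against a book image database built in advance on the server. The following is a minimal Python/OpenCV sketch of that matching step, not the thesis's actual implementation: the file layout (covers/cover_*.jpg, query.jpg), the 0.75 ratio-test threshold, and the simple vote count used to pick the best cover are illustrative assumptions. Spine matching with SURF would follow the same pattern, though SURF requires an opencv-contrib build with the non-free modules enabled.

import glob
import cv2

# Detector/descriptor used in the thesis for book covers: ORB (binary descriptors).
orb = cv2.ORB_create(nfeatures=1000)

# Build a small "database": descriptors for every stored cover image (hypothetical file layout).
database = []
for path in glob.glob("covers/cover_*.jpg"):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    if desc is not None:
        database.append((path, desc))

# Descriptors of the (already segmented) query image sent by the mobile client.
query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
_, q_desc = orb.detectAndCompute(query, None)

# Brute-force Hamming matcher with a Lowe-style ratio test; the cover with the most
# surviving matches is reported as the recognized book.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
best_name, best_votes = None, 0
for name, d_desc in database:
    good = 0
    for pair in matcher.knnMatch(q_desc, d_desc, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good += 1
    if good > best_votes:
        best_name, best_votes = name, good

print("recognized cover:", best_name, "with", best_votes, "good matches")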
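
The two-stage speed-up described above distributes the database's ORB descriptors over several threads, builds multi-probe LSH hash tables on each thread, and then matches the query features against all tables before merging the results. The sketch below illustrates that scheme with OpenCV's FLANN LSH index and a Python thread pool; the number of threads, the LSH parameters (table_number, key_size, multi_probe_level), the ratio test, and the random placeholder descriptors are assumptions for illustration rather than the settings used in the thesis.

from concurrent.futures import ThreadPoolExecutor
import numpy as np
import cv2

FLANN_INDEX_LSH = 6
N_THREADS = 4

def build_matcher(descriptor_chunk):
    # Stage 1: build multi-probe LSH hash tables for one chunk of database descriptors.
    index_params = dict(algorithm=FLANN_INDEX_LSH,
                        table_number=8,        # number of hash tables (assumed)
                        key_size=20,           # hash key length in bits (assumed)
                        multi_probe_level=2)   # neighbouring buckets probed per query (assumed)
    matcher = cv2.FlannBasedMatcher(index_params, dict(checks=32))
    matcher.add([descriptor_chunk])
    matcher.train()
    return matcher

def query_matcher(matcher, query_desc):
    # Stage 2: match query features against one matcher's hash tables, with a ratio test.
    good = []
    for pair in matcher.knnMatch(query_desc, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    return good

# db_desc: all ORB descriptors of the database images, stacked row-wise (uint8).
# q_desc:  ORB descriptors of the camera image from the mobile client.
db_desc = np.random.randint(0, 256, (8000, 32), dtype=np.uint8)  # placeholder data
q_desc = np.random.randint(0, 256, (500, 32), dtype=np.uint8)    # placeholder data

chunks = np.array_split(db_desc, N_THREADS)
with ThreadPoolExecutor(max_workers=N_THREADS) as pool:
    matchers = list(pool.map(build_matcher, chunks))                         # stage 1
    results = list(pool.map(lambda m: query_matcher(m, q_desc), matchers))   # stage 2

# Merge: keep, for each query feature, its best match over all threads' tables.
best = {}
for good in results:
    for m in good:
        if m.queryIdx not in best or m.distance < best[m.queryIdx].distance:
            best[m.queryIdx] = m
print("merged matches:", len(best))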
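
The abstracts also mention a path-planning technique that uses an environment map to generate a collision-free path from the user's position to the searched book, but they do not name the algorithm. As one possible stand-in, the sketch below runs breadth-first search on a binary occupancy grid; the example map, start, and goal are made up.

from collections import deque

def bfs_path(grid, start, goal):
    # Breadth-first search on an occupancy grid; 0 = free cell, 1 = obstacle.
    # Returns a list of (row, col) cells from start to goal, or None if unreachable.
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# Made-up environment map: 1s are bookshelves, 0s are walkable aisles.
env_map = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(bfs_path(env_map, start=(0, 0), goal=(4, 4)))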
