
基於單眼視覺之無人飛行器室內定位與目標追蹤視覺伺服系統

Monocular Vision-Based Unmanned Aerial Vehicle Visual Servo System for Indoor Positioning and Target Tracking

Advisor: 連豊力 (Feng-Li Lian)

Abstract


The main topic of this thesis is how an unmanned aerial vehicle (UAV) perceives its own state and that of its surroundings in an indoor environment, or in situations where it cannot rely on external sensing. Building on that, the thesis develops practical scenarios: tasks that can be accomplished under the limitations of the airframe's own sensors and of external conditions, that is, achieving a closed-loop automatic control system with minimal sensing and computational cost.

The proposed system consists of two blocks. The first is a real-time image processing system that relies on the lowest-cost onboard sensor, a monocular camera. The camera transmits real-time images, which are then processed by image processing algorithms consisting mainly of a color filter and an image sharpening filter connected in a serial (cascaded) structure to produce the final output image. This approach captures target objects not only for artificially designed markers but also for naturally occurring objects, and compared with Visual SLAM or machine-learning methods, the proposed algorithms are a more suitable solution in terms of the computational cost at the ground base station. At the end of the block, image features are output; a "feature" here means the image signal fed into the closed-loop control system. All of the problems handled in this block describe the relationship between the robot and its environment: through the image link, the robot recognizes the relationship between itself and its surroundings.

The second block is the visual servo control system. Its input is the control signal produced by the real-time image processing system: a vector composed of the target's coordinate position on the two-dimensional image plane and its area on the image plane. The control signal and the control reference form an error term, which is fed into a linear controller that changes the current state, forming a closed-loop structure. The linear controller is designed around the dimensions the vehicle can actually control: independent PID controllers are designed for the horizontal, vertical, and depth axes, and the control performance is quantitatively analyzed by changing different combinations of the linear controllers. Since both the control signal and the control reference come from the results of the image processing algorithms, the thesis relies entirely on images to achieve automatic control, hence the name visual servo control system.

The thesis focuses on developing minimal-cost application cases and demonstrates the theoretical algorithms entirely through experiments. The completed system is therefore evaluated in two scenarios: relative positioning with respect to a static marker, and tracking of a dynamic target. In the static-marker relative positioning scenario, markers that do not provide attitude (pose) estimation are used to demonstrate the robustness of the proposed image processing algorithms, which is also the main difference from current academic work. The dynamic target tracking scenario is designed with a leader airframe and a follower: the follower runs the visual servo control algorithm to track the leader.
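As a concrete illustration of the first block, the following is a minimal sketch of a cascaded color filter and sharpening filter that outputs the (u, v, area) image feature, written in Python with OpenCV. The HSV bounds, kernel size, and unsharp-mask weights are illustrative assumptions, not values taken from the thesis.

```python
import cv2
import numpy as np

def extract_feature(frame_bgr,
                    hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
    """Return the (u, v, area) feature of the largest color-matched blob, or None."""
    # Stage 1: color filter -- keep only pixels inside the (assumed) HSV range.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

    # Stage 2: sharpening filter (unsharp mask), applied in series after the
    # color filter to tighten blob boundaries before feature extraction.
    blurred = cv2.GaussianBlur(mask, (5, 5), 0)
    sharpened = cv2.addWeighted(mask, 1.5, blurred, -0.5, 0)

    # Feature extraction: image-plane centroid and area of the largest blob.
    contours, _ = cv2.findContours(sharpened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid in pixels
    return u, v, cv2.contourArea(blob)               # feature vector (u, v, A)
```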

Abstract (English)


This thesis mainly discusses how an unmanned aerial vehicle senses its own state and the surrounding environment indoors, or in situations where it cannot rely on external sensing. On that basis, the thesis develops practical application scenarios: the work tasks that can be accomplished under the limitations of the airframe's own sensors and of external conditions, that is, achieving a closed-loop automatic control system with the smallest sensor set and minimum computational cost.

The proposed system is divided into two parts. The first block, a real-time image processing system, relies on the lowest-cost onboard sensor, a monocular camera. The camera transmits real-time images, which are processed by image processing algorithms consisting mainly of a color filter and an image sharpening filter connected in a serial structure to output the final image. This method can capture the target object not only for artificially designed markers but also for objects that occur in nature, and compared with Visual SLAM or machine learning, the proposed image processing algorithm is a more suitable solution in terms of the computing cost at the ground base station. At the end of the block, image features are output; a "feature" refers to the image signal fed into the closed-loop control system. The problems dealt with in this block are all descriptions of the relationship between the robot and its environment: through the image link, the robot recognizes the relationship between itself and its surroundings.

The second block is the visual servo control system. Its input is the control signal sent by the real-time image processing system after processing by the image processing algorithms; the control signal is a vector composed of the coordinate position on the two-dimensional image plane and the area on the image plane. The control signal and the control reference form an error term, which is fed to a linear controller that changes the current state, forming a closed-loop structure. The design of the linear controller takes into account the dimensions the vehicle can control: three independent PID controllers are designed for the horizontal, vertical, and depth axes, and the control performance is quantitatively analyzed by changing the gains of the linear controllers. Since the control signal and the control reference are both derived from the results of the image processing algorithm, this thesis relies entirely on images to achieve automatic control, hence the name visual servo control system.

This thesis focuses on developing an application case with minimal cost and fully demonstrates the results of the theoretical algorithms experimentally. The completed system design is therefore accompanied by two scenarios: relative positioning with respect to static markers and dynamic target tracking. In the static-marker relative positioning scenario, markers that do not provide attitude estimation are used to demonstrate the robustness of the image processing algorithms, which is also the biggest difference from current academic work. The dynamic target tracking scenario uses a leader airframe and a follower: the follower runs the visual servo control algorithm to complete the tracking task on the leader.
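As a concrete illustration of the second block, the following is a minimal sketch of the closed loop described above: the error between the (u, v, area) feature vector and its reference drives three independent PID controllers, one per controllable axis. The gains, sign conventions, and command interface are placeholders, not values from the thesis.

```python
from dataclasses import dataclass

@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_err: float = 0.0

    def step(self, err: float, dt: float) -> float:
        """One PID update for a single axis."""
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt if dt > 0 else 0.0
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One independent controller per controllable axis, as described above.
# Gains are placeholders, not values from the thesis.
pid_horizontal = PID(kp=0.004, ki=0.0, kd=0.001)   # u-error    -> lateral command
pid_vertical   = PID(kp=0.004, ki=0.0, kd=0.001)   # v-error    -> vertical command
pid_depth      = PID(kp=2e-5,  ki=0.0, kd=0.0)     # area-error -> forward command

def visual_servo_step(feature, reference, dt):
    """feature, reference: (u, v, area) tuples; returns one command per axis."""
    err_u, err_v, err_area = (r - f for r, f in zip(reference, feature))
    return (pid_horizontal.step(err_u, dt),
            pid_vertical.step(err_v, dt),
            pid_depth.step(err_area, dt))
```

In an actual deployment, the three outputs would be mapped to the vehicle's lateral, vertical, and forward velocity commands and saturated to safe limits.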

