Image Processing Technology Combined with Robot Arm Control Application

Advisor: 韓維愈
Co-advisor: 李增奎 (Zen-Gkui Li)

Abstract


With the increasing demand for production automation, robotic arms have become indispensable equipment in factories. Their applications have evolved from fixed-motion assembly and transfer tasks to intelligent robotic arms that can adapt to different environments. The most common solution is to integrate computer vision with robotic arm control, so that the intelligent robotic arm system can see objects and automatically plan arm motions according to object characteristics and positions. Common ways to obtain object position information from images include computing the 3D coordinates of the object and a reference point from images captured by a dual-lens camera, or obtaining position information directly from a 3D depth camera, and then deriving the spatial displacement the robotic arm needs to grasp the object. Such a scheme requires a high-precision robotic arm to grasp objects accurately; with a low-precision robotic arm, the error accumulated over multiple moves prevents the arm from correctly reaching the object's position. This study proposes a control scheme that combines computer vision for low-precision robotic arms. The proposed method uses a single camera to capture images, obtains the relative position of the robotic arm and the object from successive frames, and continuously steers the arm to the object's position to grasp it. The system consists of two parts. The first part captures images with a camera and locates the object and the gripper jaws through camera calibration, HSV color-space conversion, and edge detection. The second part transmits the position coordinates via MQTT to a Raspberry Pi, which controls the 6-axis robotic arm. The system is developed in Python, using OpenCV to build the vision functions the system requires.

Keywords

Computer vision, Python, OpenCV, Robotic arm, MQTT, Camera calibration, Raspberry Pi

Parallel Abstract


With the increasing demand for production automation, the robotic arm has become an indispensable device in the factory. Its applications have evolved from fixed assembly or transfer motions to smart robotic arms that can adapt to different environments. The most common solution is to integrate computer vision with robotic arm control, so that the robotic arm system can see objects and automatically plan arm movements based on the characteristics and position of the object. Normally, the images captured by a dual-lens camera are used to calculate the 3D coordinates of the object and a reference point, or a 3D depth camera directly provides the position information; this information is then used to calculate the spatial movement required for the robot arm to grasp the object. This scheme requires a high-precision robotic arm to grasp objects accurately. With a low-precision robotic arm, the cumulative error of multiple moves causes the arm to miss the object's position. This study proposes a control scheme combining computer vision for low-precision robotic arms. The proposed method uses a single camera to capture images, acquires the relative positions of the robotic arm and the object from continuous images, and continuously controls the arm to move to the object's position and grasp it. The system consists of two parts. The first part captures images with the camera and uses camera calibration, HSV color-space conversion, and edge detection to find the positions of the object and the gripper jaws. The second part uses MQTT to transmit the position coordinates to a Raspberry Pi, which controls the 6-axis robot arm. The system uses the Python programming language as the development tool and OpenCV to construct the functions required for system vision.
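The second stage, sending position coordinates over MQTT to the Raspberry Pi, could be sketched as below. The JSON field names, the topic `arm/position`, and the broker hostname are illustrative assumptions; the thesis only states that coordinates travel over MQTT. The publisher assumes the common paho-mqtt package, imported lazily so the payload helper has no external dependency.

```python
import json

def make_position_payload(obj_xy, gripper_xy):
    """Encode object and gripper pixel coordinates as a JSON payload.

    Field names are illustrative assumptions, not the thesis format.
    """
    return json.dumps({
        "object": {"x": obj_xy[0], "y": obj_xy[1]},
        "gripper": {"x": gripper_xy[0], "y": gripper_xy[1]},
    })

def publish_position(broker_host, payload, topic="arm/position"):
    """Publish the payload to the Raspberry Pi's MQTT broker.

    Assumes the paho-mqtt package; topic and host are placeholders.
    """
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.connect(broker_host, 1883)
    client.publish(topic, payload)
    client.disconnect()

payload = make_position_payload((169.5, 119.5), (60.0, 200.0))
# publish_position("raspberrypi.local", payload)  # enable on the real network
```

On the Raspberry Pi side, a subscriber on the same topic would decode the JSON and translate the pixel offsets into 6-axis arm commands, closing the visual-feedback loop described in the abstract.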

Parallel Keywords

Computer Vision, Python, OpenCV, Robotic arm, MQTT, Camera calibration, Raspberry Pi
