
Construction of an Automation Process for Robotic Object Grasping in Clutter

Advisor: 李志中

Abstract


This study constructs an automated grasping process for randomly piled objects. The pipeline first captures an image of the cluttered scene with an RGB-D camera and feeds it to a Mask R-CNN model to segment out individual object images. An Augmented Autoencoder (AAE) then estimates each object's pose, and feasible grasping points are predicted through grasping-experience transfer. Finally, the depth image is used in a gripper-interference check to screen the final grasping points, which the robot arm executes to pick and sort the objects. To address the heavy labor and time cost of collecting and labeling training data, this study synthesizes a large training dataset with rendering/simulation software and annotates it automatically by program; to bridge the domain gap between real and virtual data, domain randomization is applied to increase the diversity of the synthetic images, so that deep-learning models trained on synthetic images can also be applied to real-world scenes. For the annotation of grasping experience, an automatic labeling procedure based on ArUco markers is likewise established to reduce manual intervention. Finally, to verify that the proposed automated grasping process is applicable to real scenes, a robot-arm grasping system is set up for grasping tests. Besides saving considerable time in preparing training data and annotating grasping experience, the system achieves grasping success rates of 93%, 90.9%, and 71.4% for single-type piles of metal round tubes, plastic T-shaped water pipes, and metal L-shaped door handles, respectively, and 85.1% for mixed-object piles.
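As an illustration of the pipeline summarized above, the following is a minimal Python sketch of one pick attempt. All of the callables (segment, estimate_pose, transfer_grasps, collision_free, execute_grasp) are hypothetical placeholders standing in for the Mask R-CNN, AAE, grasping-experience-transfer, depth-interference, and robot-control stages; they are not code from the thesis.

    import numpy as np

    def pick_from_clutter(rgb, depth, segment, estimate_pose, transfer_grasps,
                          collision_free, execute_grasp):
        """Attempt one pick on a cluttered scene.

        segment(rgb)             -> list of boolean instance masks (Mask R-CNN stage)
        estimate_pose(crop)      -> 4x4 object pose matrix (Augmented Autoencoder stage)
        transfer_grasps(pose)    -> candidate grasp poses from annotated grasping experience
        collision_free(g, depth) -> True if the gripper fits at grasp g (depth interference check)
        execute_grasp(g)         -> command the robot arm to grasp at g
        """
        for mask in segment(rgb):
            crop = np.where(mask[..., None], rgb, 0)   # isolate a single object image
            pose = estimate_pose(crop)
            for g in transfer_grasps(pose):
                if collision_free(g, depth):
                    execute_grasp(g)
                    return True                        # one object picked
        return False                                   # no feasible grasp found

In practice each placeholder would wrap the corresponding trained model or robot driver; the sketch only fixes the order of the stages described above.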

Abstract (English)


This thesis proposes a process for automating robotic object grasping and classification in cluttered environments. The pipeline first uses an instance segmentation model to segment each object in the clutter, then applies an augmented autoencoder to estimate the object pose and obtain feasible grasping candidates via grasping experience transfer. Finally, to avoid gripper collisions, the pipeline uses depth information to determine the optimal grasping points for the robot. More specifically, this thesis focuses on developing a process that automatically generates a synthetic training dataset using simulation software combined with the 3D CAD models of the objects. In addition, a novel algorithm is proposed to automatically annotate the feasible grasping points obtained from grasping experience transfer. To verify the effectiveness of the process, a real robotic grasping system is set up for the experiments. It is shown that the success rates of grasping metal tubes, plastic T-shaped pipes, metal L-shaped handles, and mixed objects reach 93%, 90.9%, 71.4%, and 85.1%, respectively.
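As one possible reading of the depth-based collision check mentioned above, the sketch below rejects a parallel-jaw grasp if either finger's footprint in the depth image contains a surface that would block the descending finger. The region layout, clearance value, and function name are assumptions for illustration, not the thesis's implementation.

    import numpy as np

    def gripper_fits(depth, finger_regions, grasp_depth, clearance=0.005):
        """Return True if both finger footprints are free of obstructing surfaces.

        depth          : HxW depth image in meters (distance from the camera)
        finger_regions : two (row_slice, col_slice) pairs where the fingers descend
        grasp_depth    : planned fingertip depth along the camera axis, in meters
        clearance      : safety margin in meters (assumed value)
        """
        for rows, cols in finger_regions:
            region = depth[rows, cols]
            region = region[np.isfinite(region) & (region > 0)]  # drop invalid pixels
            # Any surface closer to the camera than the fingertip target (plus a
            # margin) lies inside the volume the finger sweeps, so the grasp collides.
            if region.size and region.min() < grasp_depth + clearance:
                return False
        return True

A full system would also account for finger thickness and the approach direction; this sketch only screens a top-down approach against the depth map.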
