
Author: 陳境浩 (Chen, Jing-Hao)
Title: 基於人類演示學習之機械手臂自動化控制技術
(Towards the Flexible Automation Robot Control Technology through Learning from Human Demonstration)
Advisors: 蔣欣翰 (Chiang, Hsin-Han); 許陳鑑 (Hsu, Chen-Chien)
Degree: Master
Department: Department of Electrical Engineering
Year of publication: 2019
Academic year of graduation: 107 (ROC calendar, i.e. 2018–2019)
Language: Chinese
Pages: 78
Keywords (Chinese): 人類演示學習、機械手臂、機器視覺、人工智慧、人機互動
Keywords (English): Learning from human demonstration, robotic arm, machine vision, artificial intelligence, human-robot interaction
DOI URL: http://doi.org/10.6345/NTNU201900906
Document type: Academic thesis
Usage: 66 views, 0 downloads
Table of Contents
  • Abstract (Chinese); Abstract (English); Acknowledgments; Contents; List of Figures; List of Tables
  • Chapter 1: Introduction
    1.1 Research Motivation and Background
    1.2 Literature Review
    1.3 Thesis Organization
  • Chapter 2: System Design Based on Learning from Human Demonstration
    2.1 System Architecture
    2.2 Experimental Platform
    2.3 Hardware Implementation Environment
    2.4 Software Overview
  • Chapter 3: Control Strategy Based on Learning from Human Demonstration
    3.1 Image Detection
    3.2 Human Skeleton Detection
    3.3 Robotic Arm Kinematics
  • Chapter 4: Dual-Arm Demonstration Learning System
    4.1 Dual-Arm Mechanism Design
    4.2 Communication and Control Strategy
    4.3 Robotic Arm Vision System
    4.4 Dual-Arm Virtual System
  • Chapter 5: Experiments and Results
    5.1 Object Recognition Results Based on YOLO v2 and YOLO v3
    5.2 Simulation of the Dual-Arm Virtual System Based on Learning from Human Demonstration
    5.3 Real-World Dual-Arm Grasping Results Based on Learning from Human Demonstration
  • Chapter 6: Conclusions and Future Work
    6.1 Conclusions
    6.2 Future Work
  • References

    [1] A. Bandura, “Organisational applications of social cognitive theory,” Australian Journal of Management, vol. 13, no. 2, pp. 275-302, 1988.
    [2] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
    [3] S. Choi, et al., “Lead-through robot teaching,” In Proc. IEEE Conference on Technologies for Practical Robot Applications (TePRA), 2013, pp. 1-4.
    [4] L. Chen, Z. Wei, F. Zhao and T. Tao, “Development of a virtual teaching pendant system for serial robots based on ROS-I,” In Proc. IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics, China, 2017, pp. 720-724.
    [5] H.-I. Lin and Y.-H. Lin, “A Novel Teaching System for Industrial Robots,” Sensors, vol. 14, no. 4, pp. 6012-6031, June 2014.
    [6] P. Neto, J. N. Pires and A. P. Moreira, “Accelerometer-based control of an industrial robotic arm,” In Proc. 18th IEEE International Symposium on Robot and Human Interactive Communication, Japan, 2009, pp. 1192-1197.
    [7] P. Shenoy, K. J. Miller, B. Crawford and R. N. Rao, “Online electromyographic control of a robotic prosthesis,” IEEE Trans. Biomed. Eng., vol. 55, no. 3, pp. 1128-1135, Mar. 2008.
    [8] T. Yu, C. Finn, A. Xie, S. Dasari, T. Zhang, P. Abbeel and S. Levine, “One-shot imitation from observing humans via domain-adaptive meta-learning,” arXiv preprint arXiv: 1802.01557, Feb. 2018.
    [9] C. Finn, T. Yu, T. Zhang, P. Abbeel and S. Levine, “One-shot visual imitation learning via meta-learning,” arXiv preprint arXiv: 1709.04905, Sep. 2017.
    [10] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn and A. Zisserman, “The PASCAL Visual Object Classes Challenge,” Int'l J. Computer Vision, vol. 88, no. 2, pp. 303-338, June 2010.
    [11] T.-Y. Lin, et al., “Microsoft COCO: Common objects in context,” In Proc. Eur. Conf. Comput. Vis., 2014, pp. 740-755.
    [12] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., USA, June 2009, pp. 248-255.
    [13] P. Felzenszwalb, R. Girshick, D. McAllester and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 7, pp. 1627-1645, Sep. 2010.
    [14] R. B. Girshick, J. Donahue, T. Darrell and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” In Proc., IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Ohio, Jun. 2014, pp. 580-587.
    [15] R. Girshick, “Fast R-CNN,” In Proc. IEEE International Conference on Computer Vision (ICCV), Chile, Dec. 2015, pp. 1440-1448.
    [16] S. Ren, K. He, R. Girshick and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” In Proc. Advances in Neural Information Processing Systems (NIPS), Canada, Dec. 2015, pp. 91-99.
    [17] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, “You only look once: Unified, real-time object detection,” In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, June 2016, pp. 779-788.
    [18] W. Liu, et al., “SSD: Single shot multibox detector,” In Proc. Eur. Conf. Comput. Vis. (ECCV), Oct. 2016, pp. 21-37.
    [19] A. Jain and C. C. Kemp, “EL-E: An assistive mobile manipulator that autonomously fetches objects from flat surfaces,” Auton. Robots, vol. 28, pp. 45-64, Jan. 2010.
    [20] S. Levine, N. Wagener and P. Abbeel, “Learning contact-rich manipulation skills with guided policy search,” arXiv preprint arXiv:1501.05611, May 2015.
    [21] P. Devendrakumar and S. M. Kakade, “Dynamic hand gesture recognition using Kinect sensor,” In Proc. 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC), India, Dec. 2016, pp. 448-453.
    [22] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, Sep. 2014.
    [23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, “Going deeper with convolutions,” In Proc. IEEE Computer Vision and Pattern Recognition, Boston, June 2015, pp. 1-9.
    [24] J. Redmon and A. Farhadi, “YOLO9000: Better, faster, stronger,” In Proc. IEEE Conference on Computer Vision and Pattern Recognition, Hawaii, July 2017, pp. 6517-6525.
    [25] A. Cherubini, R. Passama, A. Meline, A. Crosnier and P. Fraisse, “Multimodal control for human-robot cooperation,” In Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Japan, Nov. 2013, pp. 2202-2207.
    [26] D. Kimura, R. Nishimura, A. Oguro and O. Hasegawa, “Ultra-fast Multimodal and Online Transfer learning on Humanoid Robots,” In Proc. ACM/IEEE International conference on Human-robot interaction, Japan, March 2013, pp. 165-166.
    [27] I. E. Makrini, S. A. Elprama, J. Bergh, B. Vanderborght, A. J. Knevels, C. Jewell, F. Stals, G. Coppel, I. Ravyse, J. Potargent, and J. Berte, “Working with walt: How a cobot was developed and inserted on an auto assembly line,” IEEE Robotics & Automation Magazine, vol. 25, no. 2, pp. 51-58, May 2018.
    [28] GTX 1070 graphics card: https://tw.msi.com/Graphics-card/GEFORCE-GTX-1070-GAMING-X-8G/Gallery
    [29] Logitech C922 webcam: https://www.logitech.com/zh-tw/product/c922-pro-stream-webcam
    [30] Kinect for Xbox One: http://time.com/3195116/standalone-kinect-xbox-one/
    [31] UR3 6-axis robotic arm: https://www.universal-robots.com/
    [32] UR3 robotic arm control box: http://www.nonead.com/intelligence_content/9657.html
    [33] Robotiq 2F-85 parallel gripper: https://robotiq.com/products/2f85-140-adaptive-robot-gripper
    [34] qb SoftHand five-finger gripper: https://www.youtube.com/watch?v=0oX9eJsnVZc
    [35] UR5 robotic arm kinematics: http://rasmusan.blog.aau.dk/files/ur5_kinematics.pdf
    [36] J. Redmon and A. Farhadi, “YOLOv3: An incremental improvement,” arXiv preprint arXiv:1804.02767, Apr. 2018.
    [37] V-REP simulation environment: http://www.coppeliarobotics.com/
    [38] Tinkercad: https://www.tinkercad.com/

    Electronic full text embargoed until 2024/08/31.