
A Learning Model for Classification of Baseball Videos Based on Adaptive Content Selection

Advisor: 李明穗

Abstract


Baseball is one of the most popular sports in the world, generating enormous business opportunities every year, and the technologies around it are booming. MLB-YouTube is a fine-grained action recognition dataset that is more difficult than typical action recognition datasets, because its scenes are highly similar and the differences between classes are subtle. In this thesis, we fine-tune an LSTM model with an attention mechanism to make it more suitable for the MLB-YouTube dataset, and introduce adaptive content selection to help the model focus on the actions of the players and the umpire. In addition, we make two improvements to the MLB-YouTube dataset. First, the original dataset contains very few bunt and hit-by-pitch videos, so we collected many more videos of these two classes from the Internet to make the dataset more complete. Second, we define new classes based on the events of a baseball game: each event is a combination of several activity classes, and the model classifies videos by event. This new class definition also helps improve classification accuracy. The proposed approach outperforms the state of the art by 6.1% mAP under the original class definition and by 17.3% accuracy under the new class definition.
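
The model described above can be pictured as follows. This is a minimal sketch in PyTorch, assuming pre-extracted per-frame CNN features; the module names, feature dimensions, class count, and the exact form of the selection gate are all hypothetical, since the abstract does not specify them.

```python
# Sketch of an LSTM with temporal attention plus an adaptive
# content-selection gate. All names, dimensions, and design details
# are illustrative assumptions, not the thesis's actual architecture.
import torch
import torch.nn as nn

class AttentiveLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=512, num_classes=8):
        super().__init__()
        # Adaptive content selection: a learned per-frame gate that
        # scores how relevant each frame is (e.g., player/umpire motion).
        self.select_gate = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)   # temporal attention scores
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):                 # frames: (B, T, feat_dim)
        gate = self.select_gate(frames)        # (B, T, 1) relevance weights
        h, _ = self.lstm(frames * gate)        # suppress irrelevant frames
        alpha = torch.softmax(self.attn(h), dim=1)
        context = (alpha * h).sum(dim=1)       # attention-pooled clip feature
        return self.classifier(context)        # per-class logits

model = AttentiveLSTMClassifier()
logits = model(torch.randn(2, 64, 1024))       # 2 clips, 64 frames each
```

Since the activity classes in MLB-YouTube are not mutually exclusive (a single pitch can be both "swing" and "hit"), the logits would typically be trained with a per-class sigmoid and binary cross-entropy, which is consistent with evaluating by mAP.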

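The event-based class definition can likewise be pictured as a lookup from a combination of activity labels to a single event label, with the clip then classified by that event. The event names and combinations below are toy examples chosen only for illustration; the thesis defines its own mapping.

```python
# Toy sketch of event-level classification: an event is a combination
# of activity labels. The table entries here are hypothetical examples.
EVENT_TABLE = {
    frozenset(["swing", "hit", "in play"]): "hit into play",
    frozenset(["swing", "strike"]): "swinging strike",
    frozenset(["bunt", "in play"]): "bunt into play",
    frozenset(["hit by pitch"]): "hit by pitch",
    frozenset(["ball"]): "ball",
}

def classify_event(predicted_activities):
    """Map a set of predicted activity labels to one event class."""
    return EVENT_TABLE.get(frozenset(predicted_activities), "other")

print(classify_event({"swing", "hit", "in play"}))  # -> hit into play
```
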
References


A. J. Piergiovanni and M. S. Ryoo, “Fine-grained activity recognition in baseball videos,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018, pp. 1821–1830.
G. Kanojia, S. Kumawat, and S. Raman, “Attentive spatio-temporal representation learning for diving classification,” in IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019.
K. Soomro, A. R. Zamir, and M. Shah, “UCF101: A dataset of 101 human action classes from videos in the wild,” arXiv preprint arXiv:1212.0402, 2012. [Online]. Available: http://arxiv.org/abs/1212.0402
H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, “HMDB51: A large video database for human motion recognition,” in IEEE International Conference on Computer Vision (ICCV), 2011, pp. 2556–2563.
W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman, “The Kinetics human action video dataset,” arXiv preprint arXiv:1705.06950, 2017. [Online]. Available: http://arxiv.org/abs/1705.06950
