
Image Classification by Combining Key Term Extraction and Spoken Term Detection

Advisor: Lin-shan Lee (李琳山)

Abstract


In early childhood, humans often learn directly through sight and hearing what words they were never explicitly taught refer to, and from there come to understand the related meanings or concepts. This thesis aims to let machines learn in a similar way, automatically extracting some knowledge from audio-visual data on the Internet as a preliminary form of learning. This is also one way to make effective use of Internet resources. For example, the Internet offers cooking tutorials, nature documentaries, dance instruction videos, and so on; if this information could be used effectively, it would greatly benefit daily life. However, because most videos on the Internet lack proper annotation, it is not easy for machines to learn from them directly, and annotating them by hand would require a considerable amount of human labor, which is not a good solution either. This thesis therefore proposes a system that applies key term extraction and spoken term detection to the video narration to label the video frames automatically, while also discovering the important concepts in the videos to serve as classes; these automatically labeled data are then used as training data to train an image classification model, as a first step toward the goal described above.
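To make the frame-labeling step concrete, the following is a minimal sketch of the idea, not the thesis' actual implementation: the helpers detect_term (a spoken term detection front end returning detection times) and extract_frame (returning the frame at a given time) are hypothetical placeholders supplied by the caller.

from dataclasses import dataclass

@dataclass
class Detection:
    term: str        # key term detected in the narration
    time_sec: float  # time (in seconds) at which the term was spoken

def label_frames(video_path, key_terms, detect_term, extract_frame):
    # detect_term(video_path, term) -> list[Detection] is assumed to wrap a
    # spoken term detection front end; extract_frame(video_path, t) is assumed
    # to return the video frame at time t (e.g., as a numpy array).
    labeled = []
    for term in key_terms:
        for hit in detect_term(video_path, term):
            # Use the frame shown while the key term is being spoken as a
            # training example for the class named by that key term.
            frame = extract_frame(video_path, hit.time_sec)
            labeled.append((frame, term))
    return labeled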

Abstract (English)


Children usually learn objects or concepts from visual and auditory input without being explicitly taught about them. We hope machines can do something similar, i.e., learn from unlabeled video and audio automatically. Abundant resources are available on the Internet, for example instructional and training videos about cooking, dancing, and the environment on YouTube, and we wish to be able to use them. Most such videos are not labeled, and are thus difficult to use for training machines, while human annotation of these videos is expensive. This research therefore proposes a direction and develops a system that performs key term extraction and spoken term detection over the audio and uses the detected key terms to label the video frames automatically. It can also discover the important concepts in the videos, treating them as classes of images. We then use these labeled data to train an image classification model, and reasonably good results are obtained. A novel key term extraction approach based on the location of the terms and their context in the sentences is also proposed, and is shown to be domain-independent; in other words, once trained, it can be used to extract key terms in unseen domains.
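To illustrate the final training step, here is a minimal sketch of fitting an image classifier on the automatically labeled frames. The small CNN architecture, the 64x64 input size, and the PyTorch framework are illustrative assumptions, not necessarily the model or toolkit used in the thesis.

import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    # Illustrative small CNN; input frames are assumed to be 3 x 64 x 64 tensors.
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs=5, lr=1e-3):
    # loader is assumed to yield (frame_tensor, class_index) batches built from
    # the automatically labeled frames; class indices correspond to the
    # discovered key terms, which serve as the image classes.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for frames, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(frames), labels)
            loss.backward()
            optimizer.step()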

