
A Multimodal Analysis of Document Intent: The Case of Instagram

An Analysis of Multimodal Document Intent in Instagram Posts

Advisor: 謝舒凱

Abstract


Social media platforms such as Instagram now tend to combine images with textual content, creating a new "multimodal" mode of communication. Analyzing multimodal relationships with computational methods has become a popular topic; however, no prior study has examined document intent and image-text relationships in the multimodal image-caption pairs posted by Taiwan's top 100 influencers. Using multimodal representations of text and images, this study adopts the image-text relationship taxonomy of Kruk et al. (2019) (contextual relationship, semiotic relationship, and author's intent), proposes new image-text representations for these three taxonomies (Sentence-BERT and image embeddings), and applies computational models (Random Forest and Decision Tree classifiers) to classify the three types of image-text relationships. The results show an accuracy of up to 86.23%.
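The pipeline summarized above (Sentence-BERT caption embeddings fed to a Random Forest or Decision Tree classifier) can be sketched roughly as follows. This is a minimal illustration assuming the sentence-transformers and scikit-learn libraries; the model name, example captions, and intent labels are hypothetical stand-ins rather than the thesis's actual data or configuration.

    # Minimal sketch: encode captions with Sentence-BERT, then train a
    # Random Forest on one of the three relationship labels (author's intent here).
    # Model name, captions, and labels are illustrative assumptions.
    from sentence_transformers import SentenceTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    captions = [
        "Sunset run along the riverside park",
        "New drop! Link in bio for 20% off",
        "Throwback to our graduation trip",
        "Trying the new ramen place downtown",
        "Giveaway time: tag a friend to enter",
        "Morning coffee before the big meeting",
    ]
    labels = ["expressive", "advocative", "expressive",
              "informative", "advocative", "expressive"]

    # Encode each caption into a fixed-size sentence embedding.
    encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    X = encoder.encode(captions)

    # Train and evaluate a Random Forest on the caption embeddings.
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.33, random_state=42)
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))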

English Abstract


Representations on social media platforms such as Instagram increasingly combine visual and textual content in the same message, building up a modern mode of communication. Multimodal messages are central to almost every type of social interaction, especially in the context of online social multimedia content. Effective computational approaches for understanding multimodal documents are therefore needed to identify the relationships between modalities. This study extends recent advances in intent classification by proposing an approach based on image-caption pairs (ICPs). Machine learning algorithms such as the Decision Tree Classifier (DTC) and Random Forest (RF), together with encoders such as Sentence-BERT and image embeddings, are applied to classify the relationships between the two modalities: 1) contextual relationship, 2) semiotic relationship, and 3) author's intent. The study points to two results. First, although prior studies suggest that incorporating the two synergistic modalities in a combined model improves accuracy on the relationship classification task, this study found that a simple fusion strategy that linearly projects the encoded vectors from both modalities into the same embedding space may not clearly outperform a single modality; combining text and images thus requires more careful design for the two modalities to complement each other. Second, we show that these text-image relationships can be classified with high accuracy (86.23%) using the text modality alone. In sum, this study demonstrates a computational approach to analyzing multimodal documents and provides a better understanding of how to classify the relationships between modalities.
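The simple fusion baseline described above, in which encoded vectors from both modalities are linearly projected into a shared embedding space before classification, can be illustrated as below. The sketch uses synthetic embeddings and fixed random projection matrices purely to show the structure of the fused-versus-text-only comparison; the dimensions, data, and projection method are assumptions, not the thesis's actual setup.

    # Rough sketch of the simple fusion baseline vs. a text-only classifier.
    # Random data and fixed random projections stand in for real embeddings
    # and learned projection layers; all sizes are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, d_text, d_img, d_shared = 200, 384, 512, 128

    text_emb = rng.normal(size=(n, d_text))   # stand-in Sentence-BERT vectors
    img_emb = rng.normal(size=(n, d_img))     # stand-in image-encoder vectors
    y = rng.integers(0, 3, size=n)            # three relationship classes

    # Linearly project each modality into a space of the same dimension, then fuse.
    W_text = rng.normal(size=(d_text, d_shared))
    W_img = rng.normal(size=(d_img, d_shared))
    fused = np.hstack([text_emb @ W_text, img_emb @ W_img])

    text_only = cross_val_score(RandomForestClassifier(random_state=0),
                                text_emb, y, cv=3).mean()
    fused_acc = cross_val_score(RandomForestClassifier(random_state=0),
                                fused, y, cv=3).mean()
    print(f"text-only accuracy: {text_only:.3f}, fused accuracy: {fused_acc:.3f}")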

