
Learning Bag-of-words Document Representation with Multi-queries Memory Networks

Advisor: 鄭卜壬

Abstract


Document vectors provide effective, statistics-based information and are widely used in the text domain, for example in web search, question answering, and document similarity. Most existing methods take term frequencies as features and rely on word embeddings to measure global importance. However, the importance of a word in a document depends not only on its frequency and global importance, but also on the meaning of the document itself. In this thesis, we propose an attention-based unsupervised predictive model to measure the importance of each word in a document. Furthermore, since a document can be interpreted in multiple ways, we use multi-query memory networks to extract the document's meaning from different views, and recurrent gating to aggregate these meanings. Finally, we evaluate our model on two public datasets, and the experimental results show that our method significantly outperforms the state-of-the-art related methods.

Parallel Abstract (English)


Document representations provide essential, statistically grounded, compressed features for many tasks in the text domain, e.g., web search, question answering, document similarity, and relevance judgement. Current methods use term frequencies as local features and rely on word embeddings to measure global importance. However, the importance of a word in a document may depend on the meaning of the document and cannot be measured globally. In this work, we propose an attention-based unsupervised predictive model to reweight the importance of words in a document. Also, considering the multiple interpretations of a single document, we use multi-query memory networks to extract the semantics from different views, and a recurrent gating method to combine them. The experimental results show that our proposed model outperforms state-of-the-art methods on two benchmark datasets.
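To make the pipeline described above more concrete, the following is a minimal PyTorch-style sketch, not the thesis's actual implementation: a set of learned query vectors attends over a document's word embeddings, the per-query attention weights are scattered back onto the vocabulary, a recurrent gate (here a GRU cell, used as an illustrative stand-in for the recurrent gating described in the abstract) merges the per-query views, and the resulting importance vector reweights the raw bag-of-words counts. All module names, dimensions, and the toy usage at the end are assumptions made only for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiQueryReweighter(nn.Module):
    """Illustrative multi-query attention that reweights a bag-of-words vector."""

    def __init__(self, vocab_size, embed_dim=128, num_queries=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One learnable query vector per "view" of the document.
        self.queries = nn.Parameter(torch.randn(num_queries, embed_dim))
        # Recurrent gate that merges the per-query importance vectors
        # (a GRU cell is an assumption, not the thesis's exact gating).
        self.gate = nn.GRUCell(vocab_size, vocab_size)

    def forward(self, token_ids, counts):
        # token_ids: (batch, doc_len) word indices
        # counts:    (batch, vocab_size) term-frequency vectors (float)
        emb = self.embed(token_ids)                           # (batch, doc_len, embed_dim)
        # Attention of every query over the words of each document.
        scores = torch.einsum('qd,bld->bql', self.queries, emb)
        attn = F.softmax(scores, dim=-1)                      # (batch, num_queries, doc_len)
        # Scatter per-word attention weights back onto the vocabulary.
        batch, num_q, doc_len = attn.shape
        vocab_weights = counts.new_zeros(batch, num_q, counts.size(1))
        index = token_ids.unsqueeze(1).expand(-1, num_q, -1)
        vocab_weights.scatter_add_(2, index, attn)
        # Recurrently gate the per-query views into one importance vector.
        state = counts.new_zeros(batch, counts.size(1))
        for q in range(num_q):
            state = self.gate(vocab_weights[:, q], state)
        # Reweight the raw bag-of-words counts by the learned importance.
        return counts * state

# Toy usage: reweight the term-frequency vectors of two random "documents".
model = MultiQueryReweighter(vocab_size=5000)
ids = torch.randint(0, 5000, (2, 50))
tf = torch.zeros(2, 5000).scatter_add_(1, ids, torch.ones(2, 50))
doc_vectors = model(ids, tf)  # (2, 5000) reweighted document vectors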

