
An Explainable Recommendation System That Addresses the Review-Loss Problem

Explainable Recommendation System for Solving Review Loss

Advisor: 柯佳伶

Abstract


Although studies based on review features have been shown to overcome the sparsity of user-item rating data and thereby improve rating-prediction performance, they do not consider the problem of missing reviews. This thesis adopts HANN, a hierarchical attention neural network model over reviews, modifies part of the input feature information of the original model, and adjusts how the attention weights are computed at the different levels; the resulting model, called HANN-RPM, is used to predict users' ratings of items. In addition, a review generation model, HANN-RGM, is built on an encoder-decoder architecture, using the item sub-network of HANN-RPM as the encoder. It can generate textual explanations for the predicted ratings, and it can also supply generated reviews for purchased items that the user did not review; feeding these back to HANN-RPM further improves rating prediction. Experimental results show that, whether or not reviews are missing, HANN-RPM predicts ratings more accurately than HANN. When a user's reviews are missing, filling the gaps with reviews generated by HANN-RGM lets HANN-RPM approach the prediction accuracy achieved when no reviews are missing. Moreover, by extracting item semantic information from the top-k reviews, HANN-RGM generates longer and more diverse review content than NRT, which can serve as textual explanations of the predicted ratings.
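The hierarchical attention described above can be read as attention pooling over per-review representations for the user side and the item side, followed by a rating head. The following is a minimal PyTorch-style sketch of that idea only; the class names (ReviewAttentionPooling, RatingPredictor), dimensions, and the final linear combination are illustrative assumptions, not the thesis' actual HANN-RPM implementation.

# Minimal sketch of review-level attention pooling in the spirit of HANN-RPM.
# All names, dimensions, and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class ReviewAttentionPooling(nn.Module):
    """Aggregate per-review vectors into one user/item vector via attention."""
    def __init__(self, review_dim, attn_dim=64):
        super().__init__()
        self.proj = nn.Linear(review_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, review_vecs):                               # (batch, n_reviews, review_dim)
        weights = self.score(torch.tanh(self.proj(review_vecs)))  # (batch, n_reviews, 1)
        weights = torch.softmax(weights, dim=1)
        return (weights * review_vecs).sum(dim=1)                 # (batch, review_dim)

class RatingPredictor(nn.Module):
    """Predict a rating from attended user and item review representations."""
    def __init__(self, review_dim):
        super().__init__()
        self.user_pool = ReviewAttentionPooling(review_dim)
        self.item_pool = ReviewAttentionPooling(review_dim)
        self.out = nn.Linear(2 * review_dim, 1)

    def forward(self, user_reviews, item_reviews):
        u = self.user_pool(user_reviews)
        i = self.item_pool(item_reviews)
        return self.out(torch.cat([u, i], dim=-1)).squeeze(-1)

# Usage with random review encodings standing in for GRU-encoded reviews.
model = RatingPredictor(review_dim=100)
ratings = model(torch.randn(4, 8, 100), torch.randn(4, 12, 100))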

Abstract (English)


In recent years, a variety of review-based recommendation systems have been developed to model users' and items' preferences, but few of them consider users' review loss. In this paper, we present an approach to the review-loss problem that generates textual explanations to assist rating prediction. Two hierarchical attention learning models are proposed. The first is a rating prediction model named HANN-RPM, which extends the HANN model: we change the input features of the inter-review GRU layer and adjust how the attention weights are computed in HANN to improve the effectiveness of feature extraction from reviews. The second model, named HANN-RGM, generates reviews and is designed on an encoder-decoder architecture. The hierarchical attention neural network for items learned in HANN-RPM encodes the latent representations of an item's reviews, and a GRU network then decodes these latent representations into natural-language explanations. The generated review not only provides an explanation for the predicted rating, but is also used to fill in a user's missing reviews, further improving rating prediction. Extensive experiments on real-world Amazon datasets show that the proposed models not only improve rating-prediction accuracy when a review-based recommendation system suffers from review loss, but also generate useful and fluent textual explanations.
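As a concrete reading of the encoder-decoder description above, the following sketch unrolls a GRU decoder greedily from a latent representation to produce word ids. The class name ReviewDecoder, the vocabulary size, the greedy decoding strategy, and the random stand-in for the encoder output are assumptions for illustration, not the thesis' actual HANN-RGM code.

# Minimal sketch of a GRU decoder that turns an encoded review representation
# into text, in the spirit of HANN-RGM's encoder-decoder design.
import torch
import torch.nn as nn

class ReviewDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def generate(self, init_hidden, bos_id, eos_id, max_len=30):
        """Greedy decoding starting from the encoder's latent representation."""
        token = torch.full((init_hidden.size(1), 1), bos_id, dtype=torch.long)
        hidden, words = init_hidden, []
        for _ in range(max_len):
            emb = self.embed(token)                  # (batch, 1, emb_dim)
            output, hidden = self.gru(emb, hidden)
            token = self.out(output).argmax(dim=-1)  # next word id per sequence
            words.append(token)
            if (token == eos_id).all():
                break
        return torch.cat(words, dim=1)               # (batch, <=max_len)

# In the thesis, the initial hidden state would come from the item-side
# encoder shared with HANN-RPM; here a random tensor stands in for it.
decoder = ReviewDecoder(vocab_size=5000)
generated = decoder.generate(torch.randn(1, 2, 256), bos_id=1, eos_id=2)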

