Dialogue systems are now common and deployed by many companies, but most of them use rule-based or retrieval-based approaches to reply to users: responses are drawn from a predefined reply repository and therefore show little diversity. In recent years, many studies have instead tried to generate responses. In this study, we apply the deep learning sequence-to-sequence (Seq2seq) model to the short text conversation (STC) generation problem, and we participated in the Generation-based subtask of STC-2 at NTCIR-13, using the simplified Chinese data set provided by the organizers. Because most of the data are not organized into post-to-comment pairs, we first use a retrieval method to build a large training set of input sentences paired with corresponding responses, and then train the Seq2seq model on these pairs to generate replies. A basic Seq2seq model always produces the same single response for a given input and cannot generate different sentences, so we add a feedback mechanism that extracts information from the previously generated response and adds it to the input, allowing the model to produce different responses. We conduct experiments with both LSTM and GRU units in TensorFlow and compare their convergence speeds and results.
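As a rough illustration of the pipeline described above, the sketch below builds a GRU encoder-decoder in tf.keras and expresses the feedback idea as a plain function. This is a minimal sketch under our own assumptions: the vocabulary size, layer dimensions, input names, and the feedback_input helper are placeholders for illustration, not the paper's actual implementation.

```python
# Minimal GRU Seq2seq sketch (assumed configuration, not the paper's exact model).
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, EMB, HID = 8000, 128, 256  # assumed sizes, not taken from the paper

# Encoder: embed the input post and compress it into a single GRU state.
enc_in = layers.Input(shape=(None,), dtype="int32", name="post")
enc_state = layers.GRU(HID)(layers.Embedding(VOCAB, EMB)(enc_in))

# Decoder: generate the comment token by token, seeded with the encoder state.
dec_in = layers.Input(shape=(None,), dtype="int32", name="comment_shifted")
dec_seq = layers.GRU(HID, return_sequences=True)(
    layers.Embedding(VOCAB, EMB)(dec_in), initial_state=enc_state)
logits = layers.Dense(VOCAB)(dec_seq)  # per-step distribution over the vocabulary

model = Model([enc_in, dec_in], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

def feedback_input(post_ids, prev_reply_ids):
    """Feedback mechanism (sketch): append token ids extracted from the
    previously generated reply to the post, so the encoder sees a new input
    each round and the model can produce a different response."""
    return post_ids + prev_reply_ids
```

Swapping layers.GRU for layers.LSTM (which carries two state tensors, so the encoder would return and pass both) would give the LSTM variant whose convergence speed is compared in the experiments.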