With the development of graphics processing unit (GPU) technology, deep learning has been widely applied to machine learning and natural language processing (NLP) tasks. Recently, a large body of literature on applying deep learning to question answering (QA) has appeared in NLP. Most of this work concentrates on short-answer and dialogue tasks in English. This paper takes answering Chinese yes/no questions as its research topic, motivated by the relative scarcity of deep learning work on this problem. Based on a public Chinese dataset, a pre-trained Chinese BERT (Bidirectional Encoder Representations from Transformers) language model is fine-tuned and evaluated. The results show that on the five datasets expanded in this paper, both the two-class and three-class models achieve good accuracy under 10-fold cross-validation.
As graphics processing unit (GPU) technology advances, deep learning has come into wide use for machine learning and natural language processing (NLP) tasks. Recently, NLP has seen a considerable amount of literature on question answering (QA) using deep learning approaches. Most of these works concentrate on short-answer and dialogue tasks in English. In this paper, we study answering Chinese yes/no questions with a deep learning approach, a topic that is less studied in the literature. Based on a public Chinese QA dataset, which we expand into five datasets, we fine-tune and evaluate a pre-trained Chinese BERT (Bidirectional Encoder Representations from Transformers) language model. The results show that on all five expanded datasets, both our two-class and our three-class models obtain good accuracy under 10-fold cross-validation.
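The evaluation protocol mentioned in the abstract (10-fold cross-validation over a labeled yes/no dataset) can be sketched as follows. This is a minimal illustration only: a simple scikit-learn classifier stands in for the fine-tuned Chinese BERT model, and the toy data, feature dimensions, and variable names are all assumptions not taken from the paper.

```python
# Sketch of 10-fold cross-validation for a yes/no classification task.
# A real run would fine-tune a pre-trained Chinese BERT model instead of
# the placeholder logistic-regression classifier used here.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for encoded (question, passage) pairs: 100 examples with
# 8 features each. Labels: 0 = "no", 1 = "yes" (the two-class task; the
# three-class task would add a third label such as 2 = "unknown").
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
accuracies = []
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean 10-fold accuracy: {np.mean(accuracies):.3f}")
```

Stratified folds keep the yes/no label ratio roughly constant across splits, so each fold's accuracy is computed on a test set representative of the whole dataset.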