When applying deep learning to semantic analysis, machine translation, text classification, and other related applications in the field of Natural Language Processing (NLP), we must first prepare and train word embeddings. The quality of these word vectors directly affects the accuracy of the deep learning model. The aim of this study is to explore how the Word2Vec model can be used to learn vector representations of words when applying deep learning to natural language processing. After this preprocessing step, the word vectors are fed into a deep learning discriminative model to generate predictions and to support a variety of interesting applications in the future.
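To make the preprocessing step concrete, the following is a minimal sketch of the skip-gram variant of Word2Vec trained with a plain softmax objective on a toy corpus. The corpus, the window size, the embedding dimensionality, and the learning rate are all hypothetical choices for illustration; production systems would use a library implementation (e.g. gensim's `Word2Vec`) with negative sampling on a large corpus.

```python
import numpy as np

# Toy corpus (hypothetical); in practice this would be a large tokenized collection.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

# Build the vocabulary and a word-to-index mapping.
vocab = sorted({w for sent in corpus for w in sent})
w2i = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Generate (center, context) skip-gram training pairs within a fixed window.
window = 2
pairs = []
for sent in corpus:
    for i in range(len(sent)):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                pairs.append((w2i[sent[i]], w2i[sent[j]]))

# Input (embedding) and output weight matrices, randomly initialized.
rng = np.random.default_rng(0)
dim = 16  # embedding dimensionality (illustrative choice)
W_in = rng.normal(scale=0.1, size=(V, dim))
W_out = rng.normal(scale=0.1, size=(dim, V))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# SGD over cross-entropy loss: predict each context word from its center word.
lr = 0.05
for epoch in range(200):
    for c, o in pairs:
        h = W_in[c]                 # hidden layer = embedding of the center word
        p = softmax(W_out.T @ h)    # predicted distribution over context words
        err = p.copy()
        err[o] -= 1.0               # gradient of cross-entropy w.r.t. the scores
        W_out -= lr * np.outer(h, err)
        W_in[c] -= lr * (W_out @ err)

# The rows of W_in are the learned word vectors fed into the downstream model.
embedding = {w: W_in[w2i[w]] for w in vocab}
```

After training, each vocabulary word maps to a dense vector in `embedding`; these vectors would then serve as the input layer of the downstream discriminative model.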