When applying deep learning to semantic analysis, machine translation, text classification, and other related applications in the field of Natural Language Processing (NLP), word embeddings must be prepared and trained in advance. The quality of each word vector directly affects the accuracy of the deep learning model. The aim of this study is to explore how the Word2Vec model can be used to learn vector representations of words when applying deep learning to NLP. After this preprocessing step, the word vectors are fed into a deep learning discriminative model to generate predictions and to support a variety of downstream applications.