At present, attention-based sentiment classification models mostly use neural networks to learn the contextual semantics of word vectors in text, and then apply the attention mechanism at the output layer of the classifier to capture key information. This arrangement can cause the model to attend to irrelevant attribute words and disperse the attention weights. To make better use of the attention mechanism, a BiLSTM text sentiment classification model based on pre-attention is proposed. The model first uses the attention mechanism to assign weights to the word vectors of the text sequence, then feeds the weighted word vectors into a BiLSTM to learn long-distance semantic features, and finally performs sentiment classification. Experiments on the ChnSentiCorp dataset show that the model focuses on adjectives and negation words, and to some extent on connectives as well. Compared with a conventional model that applies the attention mechanism after the encoder, classification accuracy is improved by 1.7%.
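A minimal sketch of how such a pre-attention BiLSTM could be assembled in PyTorch is given below. The layer sizes, the additive scoring function, and all names (`PreAttentionBiLSTM`, `attn_score`, etc.) are illustrative assumptions, not details taken from the paper; the only element carried over from the description above is the ordering: attention weights are applied to the word vectors *before* the BiLSTM encoder rather than after it.

```python
import torch
import torch.nn as nn

class PreAttentionBiLSTM(nn.Module):
    """Sketch: attention reweights word vectors *before* the BiLSTM encoder."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Assumed scorer: a simple linear layer giving one relevance score per token.
        self.attn_score = nn.Linear(embed_dim, 1)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids, mask):
        # token_ids: (batch, seq_len); mask: 1 for real tokens, 0 for padding.
        x = self.embedding(token_ids)                       # (B, T, E)
        scores = self.attn_score(x).squeeze(-1)            # (B, T)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)              # attention weights
        # Pre-attention step: weight each word vector before sequence encoding.
        x = x * alpha.unsqueeze(-1)
        out, _ = self.bilstm(x)                            # (B, T, 2H)
        # Mean-pool over valid positions, then classify.
        pooled = (out * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
        return self.classifier(pooled)


# Usage: a batch of 4 sequences of length 10 over a toy vocabulary.
model = PreAttentionBiLSTM(vocab_size=5000)
ids = torch.randint(1, 5000, (4, 10))
mask = torch.ones(4, 10)
logits = model(ids, mask)   # (4, 2) class logits
```

Because the weights are computed from the raw embeddings rather than the BiLSTM outputs, tokens the scorer deems unimportant are attenuated before they can influence the recurrent states, which is the intended contrast with post-encoder attention.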