With the rise of artificial intelligence research, many machine learning techniques have matured and been applied across a wide range of fields. The game domain, however, still leaves considerable room for improvement because of the complexity of games: a single agent action can lead to many different situations, which not only greatly increases model complexity but also lengthens training time. This study therefore proposes strategy-enhanced proximal policy optimization (SEPPO), which combines a long short-term memory (LSTM) model with proximal policy optimization (PPO) and formulates agent strategies based on extracted features, using strategic judgment to help reinforcement learning reach comparable results faster. Experimental results in the game domain show that SEPPO effectively reduces the problem of overly long training time.
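The abstract builds on proximal policy optimization, whose core idea is the clipped surrogate objective that limits how far each update can move the policy. As context (this is standard PPO, not the authors' SEPPO code), a minimal sketch of that objective for a single sample is:

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO for one sample.

    ratio:     pi_new(a|s) / pi_old(a|s), the probability ratio
    advantage: estimated advantage A(s, a)
    eps:       clipping range (0.2 is the common default)
    """
    # Clip the ratio into [1 - eps, 1 + eps] so a single update
    # cannot move the policy too far from the old one.
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    # Take the pessimistic (lower) of the unclipped and clipped terms.
    return min(ratio * advantage, clipped * advantage)
```

For example, with a positive advantage of 1.0 and a ratio of 1.5, the objective is capped at 1.2 rather than 1.5, which is what keeps PPO updates conservative; SEPPO extends this base algorithm with an LSTM and feature-based strategies.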