Topic modeling techniques are widely used for text modeling and analysis. However, they suffer from data sparsity and a complex inference process, problems that can be alleviated by deep learning techniques such as bi-directional long short-term memory (LSTM) networks. To explore the combination of topic modeling and bi-directional LSTMs, we propose a new probabilistic topic model named GPU-LDA-LSTM. Unlike existing approaches, we first design a document semantic coding framework based on bi-directional LSTM (DSC-LSTM) to learn document representations. We then employ a dual generalized Pólya urn (GPU) mechanism over document-topic and word-word relations to enhance semantics. Furthermore, an LSTM network is used to improve contextual consistency during parameter inference. Experimental results on two real-world datasets show that our model significantly outperforms state-of-the-art models on several evaluation metrics, suggesting that it extracts more meaningful topics.
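To give an intuition for the GPU mechanism mentioned above: in a generalized Pólya urn scheme, assigning a word to a topic also fractionally boosts the counts of semantically related words under that topic. The following is a minimal sketch of such a word-word promotion step, not the paper's actual implementation; the names `gpu_increment`, `promotion`, and the `weight` value are illustrative assumptions.

```python
from collections import defaultdict

def gpu_increment(counts, topic, word, promotion, weight=0.3):
    """Generalized Polya urn update (simplified sketch).

    Assigning `word` to `topic` increments its count by 1 and also
    promotes related words by a fractional `weight`. `promotion` is a
    hypothetical map from a word to its semantically related words.
    """
    counts[topic][word] += 1.0
    for related in promotion.get(word, []):
        counts[topic][related] += weight  # fractional promotion count

# Usage: assigning "apple" to topic 0 also promotes "fruit".
counts = defaultdict(lambda: defaultdict(float))
gpu_increment(counts, 0, "apple", {"apple": ["fruit"]})
```

In a standard (non-generalized) Pólya urn only the sampled word's count would grow; the promotion step is what lets related words reinforce each other within a topic.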