Distributed deep learning plays an important role in developing intelligent computer systems. Many distributed learning algorithms have been proposed to speed up the training process. In these algorithms, workers must frequently exchange gradients to achieve fast convergence, yet exchanging gradients at a fixed period can lead to inefficient data transmission. In this paper, we propose an efficient communication method that improves the performance of the gossiping stochastic gradient descent (SGD) algorithm. We decide the timing of communication according to the change of the local model: when the local model changes significantly, the worker pushes its model to other workers to compute a new averaged result. In addition, we dynamically set a threshold for the communication period. With this efficient communication method, we reduce communication overhead and thus improve performance.
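The following is a minimal sketch of the idea of change-triggered gossip communication, not the paper's actual implementation: a worker runs local SGD, measures how far its parameters have drifted since the last communication, and averages with a randomly chosen peer only when that drift exceeds a threshold. The change metric (L2 distance), the threshold-decay rule, and the `grad_fn`/`peers` interfaces are all illustrative assumptions.

```python
import numpy as np

def local_model_change(params, snapshot):
    # One possible change metric: L2 distance between the current
    # parameters and the snapshot taken at the last communication.
    return np.linalg.norm(params - snapshot)

def gossip_average(params, peer_params):
    # Pairwise gossip step: average the local model with one peer's model.
    return 0.5 * (params + peer_params)

def train_worker(params, grad_fn, peers, lr=0.01, steps=1000,
                 threshold=0.5, decay=0.99):
    # grad_fn(params) returns a stochastic gradient; peers is a list of
    # callables that push our model to a peer and return the peer's model.
    # Both are placeholders for whatever the training framework provides.
    snapshot = params.copy()              # model at the last communication
    for _ in range(steps):
        params = params - lr * grad_fn(params)        # local SGD update

        # Communicate only when the local model has changed enough,
        # instead of at a fixed period.
        if local_model_change(params, snapshot) > threshold:
            peer_idx = np.random.randint(len(peers))
            peer_params = peers[peer_idx](params)     # exchange models
            params = gossip_average(params, peer_params)
            snapshot = params.copy()
            threshold *= decay            # assumed dynamic threshold update
    return params
```

In this sketch, a smaller threshold makes the worker communicate more often (behaving closer to fixed-period gossip), while a larger one trades communication overhead for staler peer models; the decay schedule shown is only one way the threshold could be adjusted dynamically.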