The goal of this paper is to report scientific discoveries about a Seq2Seq model. Analyzing the behavior of RNN-based models at the neuron level is considered more challenging than analyzing DNN or CNN models because of their inherently recurrent mechanism. This paper provides a neuron-level analysis to explain why a vanilla GRU-based Seq2Seq model without attention can output the correct tokens in the correct order with very high accuracy. We identify two groups of neurons, storage neurons and count-down neurons, which store token and position information respectively. By analyzing how these two groups of neurons evolve across time steps and how they interact, we uncover the mechanism by which the model produces the right tokens in the right positions.
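To make the setting concrete, the following is a minimal sketch (not the authors' code) of the kind of model under analysis: a vanilla GRU-based Seq2Seq model without attention, where the decoder is conditioned only on the encoder's final hidden state. The hyperparameters (`vocab_size`, `emb_dim`, `hidden_dim`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class VanillaGRUSeq2Seq(nn.Module):
    """Sketch of a GRU encoder-decoder without attention."""

    def __init__(self, vocab_size=32, emb_dim=16, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        # Encode the source; only the final hidden state is passed on,
        # so all token and position information must be stored in it.
        _, h = self.encoder(self.embed(src))
        # Decode conditioned solely on that hidden state (no attention).
        dec_out, _ = self.decoder(self.embed(tgt), h)
        return self.out(dec_out)


model = VanillaGRUSeq2Seq()
src = torch.randint(0, 32, (1, 5))  # dummy source tokens
tgt = torch.randint(0, 32, (1, 5))  # dummy decoder inputs (teacher forcing)
logits = model(src, tgt)            # shape: (1, 5, vocab_size)
```

Because no attention mechanism is present, the encoder's final hidden state is the sole channel through which token identities and output positions can be communicated to the decoder, which is why neuron-level inspection of that state (and its evolution during decoding) can reveal the storage and count-down roles described above.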