Learning interpretable representations has attracted considerable attention. However, most existing methods cannot easily generate or manipulate latent representations that semantically match the images of interest via interpolation. In this paper, we propose an Angular Triplet-Neighbor Loss (ATNL), which derives latent representations whose distribution matches the underlying semantic information. With the latent space guided by ATNL, we further apply spherical semantic interpolation to generate semantic warping of images. Experiments on the MNIST and CMU Multi-PIE datasets confirm the effectiveness and robustness of ATNL with spherical semantic interpolation over recent representation learning models.
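The abstract does not spell out the interpolation itself, but "spherical interpolation" conventionally refers to slerp, which moves along a great circle between two latent codes instead of a straight chord. As a hedged illustration (not the paper's actual implementation, and with made-up toy vectors), a minimal NumPy sketch might look like:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Follows the great circle between z0 and z1 on the hypersphere,
    which suits latent spaces organized by angular structure (as an
    ATNL-guided space would be). t=0 returns z0; t=1 returns z1.
    """
    z0 = np.asarray(z0, dtype=float)
    z1 = np.asarray(z1, dtype=float)
    # Angle between the two vectors (clipped for numerical safety).
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * z0 + t * z1
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * z0 \
         + (np.sin(t * omega) / sin_omega) * z1

# Toy example: interpolate between two (hypothetical) latent codes.
z_a = np.array([1.0, 0.0])
z_b = np.array([0.0, 1.0])
z_mid = slerp(z_a, z_b, 0.5)
```

Decoding `slerp(z_a, z_b, t)` for a sweep of `t` values would then yield the semantic warping between the two images described in the abstract.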