Finding nearest neighbors without training in advance has many applications, such as image mosaicking, image matching, image retrieval, and image stitching. When the quantity of data is huge and the dimensionality is high, efficiently finding the nearest neighbor (NN) becomes very important. In this article, we propose a variation of the KD-tree, the Arbitrary KD-tree (KDA), which builds the tree without evaluating variances. Multiple KDA trees can not only be built efficiently but also have mutually independent tree structures when the data set is large. Tested on extended synthetic databases and real-world SIFT data, we conclude that the KDA method achieves satisfactory accuracy on the NN problem together with computational efficiency.
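The core idea can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes "arbitrary" means each split axis is chosen at random (skipping the classic maximum-variance scan), so that multiple trees built with different seeds have independent structures. The `Node`, `build_kda`, and `nn` names are hypothetical:

```python
import random

class Node:
    """One KD-tree node: a point, its split axis, and two subtrees."""
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis = point, axis
        self.left, self.right = left, right

def build_kda(points, dims, rng=random):
    """Build a KD-tree choosing each split axis at random.

    Picking the axis is O(1) per node, versus the O(n*d) variance
    scan of the classic KD-tree construction rule.
    """
    if not points:
        return None
    axis = rng.randrange(dims)                  # arbitrary axis choice
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                      # split at the median
    return Node(points[mid], axis,
                build_kda(points[:mid], dims, rng),
                build_kda(points[mid + 1:], dims, rng))

def nn(node, query, best=None):
    """Standard KD-tree NN search with backtracking; returns (dist2, point)."""
    if node is None:
        return best
    dist2 = sum((a - b) ** 2 for a, b in zip(node.point, query))
    if best is None or dist2 < best[0]:
        best = (dist2, node.point)
    diff = query[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nn(near, query, best)
    if diff * diff < best[0]:                   # ball crosses the split plane
        best = nn(far, query, best)
    return best
```

Because the axis choice is independent of the data, several trees can be built with different random seeds (e.g. `build_kda(pts, d, random.Random(i))` for each `i`) to obtain the multiple independent structures the abstract refers to.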