Deep neural network computations incur intensive memory accesses, which limit performance on conventional von Neumann architectures. To bridge this performance gap, Processing-In-Memory (PIM) architectures are widely advocated, and crossbar accelerators built with Resistive Random-Access Memory (ReRAM) are among the most intensively studied solutions. However, due to the programming variation of ReRAM, crossbar accelerators suffer from serious accuracy degradation. To improve accuracy, we propose an adaptive data representation strategy that minimizes the analog variation errors caused by ReRAM programming variation. We evaluated the proposed strategy through a series of intensive experiments based on data collected from real ReRAM chips; the results show that it improves accuracy by around 20% on MNIST, which is close to the ideal case, and by around 40% on CIFAR10.
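To make the accuracy problem concrete, the sketch below models a crossbar matrix-vector multiplication where each programmed conductance deviates from its target value. The multiplicative Gaussian noise model, the `sigma` level, and all function names here are illustrative assumptions for demonstration, not the paper's actual variation model or the proposed data representation strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, x, sigma=0.0):
    # Model ReRAM programming variation as multiplicative Gaussian
    # noise on the programmed conductances (an illustrative assumption;
    # real devices show more complex, state-dependent variation).
    noisy_weights = weights * (1.0 + sigma * rng.standard_normal(weights.shape))
    return noisy_weights @ x

W = rng.standard_normal((16, 32))  # one crossbar tile's weight matrix
x = rng.standard_normal(32)        # input activation vector

ideal = crossbar_mvm(W, x, sigma=0.0)   # variation-free reference
noisy = crossbar_mvm(W, x, sigma=0.1)   # 10% programming variation

# Relative output error introduced by the analog variation.
rel_err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
```

Errors like `rel_err` accumulate across layers, which is why inference accuracy drops; a data representation strategy attacks this by choosing how weights are mapped onto device conductances so that the variation's impact on the computed result is reduced.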