Generally, both the qualitative condition (the accuracy of the data) and the quantitative condition (the amount of data) can significantly affect the quality of a supervised learning model. However, in real-world applications it is not always feasible to obtain a large amount of high-quality data. This research assumes a situation in which only a small amount of accurate training data is available for learning, and aims to design a transfer-learning-based approach that exploits a larger amount of noisy (in terms of both labels and features) training data to improve learning quality. This problem is non-trivial because the distribution of the noisy training data differs from that of the testing data. In this thesis, we propose a novel transfer learning algorithm, Noise-Label Transfer Learning (NLTL), to solve this problem. We exploit the label and feature information of both the accurate and the noisy data, transferring the features into a common domain and adjusting the weights of instances for learning. Experimental results show that NLTL outperforms existing approaches.
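As a generic illustration of the instance-weighting idea mentioned above (this is a minimal sketch, not the NLTL algorithm itself; the fixed weights, the synthetic data, and the use of logistic regression are all assumptions made for the example), one can combine a small clean set with a large label-noisy set and down-weight the noisy instances during training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, flip_rate):
    """Two Gaussian classes; a fraction of labels is flipped to simulate noise."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) + 3.0 * y[:, None]
    flips = rng.random(n) < flip_rate
    return X, np.where(flips, 1 - y, y)

X_clean, y_clean = make_data(30, flip_rate=0.0)    # small accurate set
X_noisy, y_noisy = make_data(300, flip_rate=0.3)   # large noisy set

X = np.vstack([X_clean, X_noisy])
y = np.concatenate([y_clean, y_noisy])

# Down-weight noisy instances (these constants are hypothetical;
# a transfer-learning method such as NLTL would learn the weights from data).
w = np.concatenate([np.full(30, 1.0), np.full(300, 0.2)])

model = LogisticRegression().fit(X, y, sample_weight=w)

X_test, y_test = make_data(200, flip_rate=0.0)     # clean test distribution
acc = model.score(X_test, y_test)
print("test accuracy:", acc)
```

The sketch only shows why reweighting helps when the training and testing distributions differ; it does not transfer features across domains, which is the other component of the proposed approach.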