Large-scale logistic regression is useful for document classification and computational linguistics. The L1-regularized form can be used for feature selection, but its non-differentiability makes training more difficult. Various optimization methods have been proposed in recent years, but no thorough comparison among them has been made. In this thesis we propose a trust region Newton method and compare it with several existing methods. Results show that our method is competitive with state-of-the-art L1-regularized logistic regression solvers. To investigate the applicability of L1-regularized logistic regression, we also conduct an experiment showing that, compared with L2-regularized logistic regression, it obtains a sparser solution with similar accuracy.
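The sparsity contrast between the two regularizers can be illustrated with a minimal sketch (not the thesis's solver): L1-regularized logistic regression trained by proximal gradient descent (ISTA, where the soft-thresholding step handles the non-differentiable penalty) versus L2-regularized logistic regression trained by plain gradient descent, on synthetic data where only a few features are informative. All data sizes, step sizes, and regularization strengths below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: shrinks toward zero,
    # setting small coefficients exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train(X, y, reg="l1", lam=0.1, lr=0.1, iters=2000):
    """Logistic regression with L1 (proximal gradient) or L2 (gradient descent)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ w) - y) / n  # gradient of average log-loss
        if reg == "l2":
            w -= lr * (grad + lam * w)          # smooth penalty: ordinary step
        else:
            # Non-smooth L1 penalty: gradient step on the loss,
            # then the proximal (soft-thresholding) step.
            w = soft_threshold(w - lr * grad, lr * lam)
    return w

# Synthetic data: 20 features, only the first 3 are informative.
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -2.0, 1.5]
y = (sigmoid(X @ true_w) > rng.random(n)).astype(float)

w_l1 = train(X, y, reg="l1")
w_l2 = train(X, y, reg="l2")
print("L1 nonzero weights:", np.count_nonzero(w_l1))
print("L2 nonzero weights:", np.count_nonzero(w_l2))
```

The L1 solution contains exact zeros on the uninformative features (feature selection), while the L2 solution merely shrinks them toward zero without eliminating them.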