One-class SVM and support vector data description (SVDD) are two effective outlier-detection techniques. They have been successfully applied in many applications under kernel settings, but for some high-dimensional data, linear rather than kernel one-class SVM and SVDD may be more suitable. Past developments in kernel and linear classification have shown that specially designed optimization algorithms can make training in the linear scenario much faster. However, we point out that, because of some differences from standard linear SVM, existing algorithms may not be suitable for one-class scenarios. We then develop novel coordinate descent methods for linear one-class SVM and SVDD. Experiments demonstrate their superior convergence speed.
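To make the setting concrete, the dual of one-class SVM is min_α (1/2) α^T Q α subject to 0 ≤ α_i ≤ 1/(νl) and e^T α = 1, with Q_ij = x_i^T x_j in the linear case. The equality constraint is one of the differences from standard linear SVM: single-variable coordinate updates would violate it, so updates must move two variables at a time while, in the linear case, maintaining w = Σ_i α_i x_i for cheap gradient entries. The following is a minimal illustrative sketch of such a two-variable scheme, not the paper's proposed algorithms; the function name, random pair selection, and stopping rule are assumptions for illustration.

```python
import numpy as np

def linear_ocsvm_cd(X, nu, iters=500, seed=0):
    """Two-variable dual coordinate descent sketch for linear one-class SVM.

    Minimizes 0.5 * a^T Q a with Q_ij = x_i . x_j, subject to
    0 <= a_i <= 1/(nu*l) and sum(a) = 1.  Illustrative only.
    """
    l, n = X.shape
    C = 1.0 / (nu * l)
    # Feasible start: a_i = 1/l satisfies sum(a) = 1 and a_i <= C when nu <= 1.
    alpha = np.full(l, 1.0 / l)
    w = X.T @ alpha                        # maintain w = sum_i alpha_i x_i
    sq = np.einsum("ij,ij->i", X, X)       # diagonal Q_ii = ||x_i||^2
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        i, j = rng.choice(l, size=2, replace=False)
        # Gradient entries of 0.5 a^T Q a are (Q a)_k = w . x_k in the linear case.
        g_i, g_j = X[i] @ w, X[j] @ w
        denom = sq[i] + sq[j] - 2.0 * (X[i] @ X[j])
        if denom <= 1e-12:
            continue
        # Exact minimizer along the direction e_i - e_j (preserves sum(a) = 1).
        t = -(g_i - g_j) / denom
        # Clip so that 0 <= alpha_i + t <= C and 0 <= alpha_j - t <= C.
        t = max(t, -alpha[i], alpha[j] - C)
        t = min(t, C - alpha[i], alpha[j])
        alpha[i] += t
        alpha[j] -= t
        w += t * (X[i] - X[j])             # O(n) update keeps w consistent
    return alpha, w
```

Because each step exactly minimizes the objective along a feasibility-preserving direction, the objective 0.5‖w‖² is nonincreasing; the sketch omits the working-set selection and convergence refinements that a practical solver would need.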