Building a classification model from thousands of available predictor variables with a relatively small sample size presents challenges for most traditional classification algorithms. When the number of samples is much smaller than the number of predictors, many different models can fit the data comparably well, so there is a multiplicity of good classification models. An ensemble classifier combines multiple single classifiers to improve classification accuracy. This paper gives an overview of tree-based classifiers and compares the performance of three ensemble classifiers, random forest (RF), classification by ensembles from random partitions (CERP), and adaptive boosting (AdaBoost), against three single-tree algorithms: classification tree (CTree); classification rule with unbiased interaction selection and estimation (CRUISE); and quick, unbiased and efficient statistical tree (QUEST). The six tree-based classifiers are applied to five high-dimensional datasets. In every dataset, the ensemble classifiers achieve much higher classification accuracy than the single-tree algorithms, with the sole exception of AdaBoost on one dataset. RF and CERP are comparable in accuracy, and both of these bagging-type classifiers outperform the boosting-based AdaBoost classifier. Among the single-tree algorithms, QUEST is generally more accurate than CTree and CRUISE.
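The bagging-versus-boosting-versus-single-tree comparison described above can be illustrated with a minimal sketch. The sketch below uses scikit-learn's RandomForestClassifier, AdaBoostClassifier, and DecisionTreeClassifier as generic stand-ins; this is an assumption for illustration only, since CERP, CRUISE, and QUEST have no scikit-learn implementations, and the paper's five datasets are replaced here by a simulated high-dimensional, small-sample (p >> n) problem. It is not the paper's experimental protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Simulate a p >> n setting: far more predictors than samples,
# with only a small number of informative predictors.
X, y = make_classification(
    n_samples=100,      # relatively small sample size
    n_features=2000,    # thousands of predictor variables
    n_informative=20,   # only a few predictors carry signal
    random_state=0,
)

classifiers = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "random forest (bagging)": RandomForestClassifier(
        n_estimators=200, random_state=0
    ),
    "AdaBoost (boosting)": AdaBoostClassifier(
        n_estimators=200, random_state=0
    ),
}

# Estimate classification accuracy by 5-fold cross-validation;
# in this regime the ensembles typically beat the single tree.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

In such a run, the two ensembles would be expected to score higher than the single tree, mirroring the qualitative pattern reported in the abstract; the exact numbers depend on the simulated data and are not the paper's results.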