For today's small-to-medium-sized datasets, Gradient Boosting Decision Tree (GBDT) algorithms are widely used in industry, academia, and data-science competitions. This paper compares the two most commonly used GBDT libraries, LightGBM and CatBoost, and investigates the causes of the performance differences between them. To make the comparison fair and consistent, we designed an experiment based on the characteristics of typical real-world datasets and selected datasets that satisfy the experiment's constraints. The results show that CatBoost indeed predicts better on datasets with more categorical columns, whereas LightGBM tends to rely on numerical columns for prediction. In terms of training time, LightGBM is consistently faster than CatBoost.
On small-to-medium-sized datasets, Gradient Boosting Decision Tree (GBDT) methods have proven effective in both academia and competitions. This paper investigates and compares the performance of the two most widely used GBDT libraries, LightGBM and CatBoost, and explores the reasons behind their performance differences. To make the comparison fair and consistent, we designed an experiment based on the characteristics of real-world datasets and selected several suitable raw datasets accordingly. The experimental results indicate that CatBoost tends to perform better when the dataset contains more categorical columns, while LightGBM tends to rely on numerical columns for prediction. In terms of training speed, LightGBM is faster than CatBoost under all circumstances.