Over the past few decades, feature selection has been widely used for dimensionality reduction, selecting a suitable subset of features from the original feature set according to certain criteria. Choosing significant variables in high-dimensional data is especially important for improving model identification and classification accuracy, so data mining techniques in many research and application areas, particularly machine learning algorithms, rely heavily on feature selection. In this thesis, we propose a new feature selection approach, Permutation-Based Feature Selection (PBFS), which uses a random forest model while controlling the false discovery rate (FDR). We evaluate the effectiveness of the proposed approach against other well-known feature selection methods on two real datasets and four simulation studies, and find that multicollinearity can have a substantial impact on the selected variables. In general, PBFS shows advantages over the other four feature selection methods. In addition, we visualize the relationships among variables through co-occurrence network analysis of the bagged decision trees produced by PBFS.
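To give a concrete picture of the general idea, the sketch below illustrates a generic permutation-testing workflow for random forest feature importances combined with Benjamini-Hochberg FDR control. It is not the exact PBFS algorithm developed in this thesis: the way the null distribution is built (refitting the forest on a permuted response), the number of permutations, the FDR level q = 0.1, the synthetic data, and all function names are illustrative assumptions.

# A minimal, generic sketch: permutation p-values for random-forest feature
# importances, followed by Benjamini-Hochberg FDR control.  This shows the
# general flavour of "random forest + FDR" screening only; it is not the
# exact PBFS procedure of the thesis.  The synthetic data, the choice to build
# the null by permuting the response, n_perm, and q = 0.1 are all assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def rf_importances(X, y):
    """Fit a random forest and return its impurity-based feature importances."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0, n_jobs=-1)
    rf.fit(X, y)
    return rf.feature_importances_

obs = rf_importances(X, y)                      # observed importances

# Null importances: refit the forest on a permuted response, under which no
# feature is truly associated with y.
n_perm = 100
null = np.vstack([rf_importances(X, rng.permutation(y)) for _ in range(n_perm)])

# One-sided permutation p-value per feature: how often the null importance is
# at least as large as the observed one.
pvals = (1 + (null >= obs).sum(axis=0)) / (n_perm + 1)

def benjamini_hochberg(pvals, q=0.1):
    """Return a boolean mask of features kept at FDR level q (BH step-up)."""
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m
    keep = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])       # largest rank passing the threshold
        keep[order[:k + 1]] = True
    return keep

print("Selected feature indices:", np.where(benjamini_hochberg(pvals))[0])

Under these assumptions, features whose observed importance rarely arises when the response is permuted receive small p-values and survive the BH step-up procedure; the remaining features are screened out at the chosen FDR level.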