
Parallel Abstract


The traditional PNN assumes that all input variables have equal status, which makes the contours of its probability density function spherical. In this study, variable weights are added to the probability density function to form the Elliptical Probabilistic Neural Network (EPNN), so that the kernel function can be shaped into an arbitrary hyper-ellipse and thus match classification boundaries of various shapes. The EPNN has three kinds of network parameters: variable weights, which represent the importance of the input variables; core-width reciprocals, which represent the effective range of the data; and data weights, which represent the reliability of the data. This study uses the principle of minimizing the error sum of squares to derive supervised learning rules for all three kinds of parameters within a unified mathematical framework. Because the kernel shape parameters of the EPNN are adjusted by these supervised learning rules and reflect the importance of the input variables to the classification, this study further derives the relationship between the kernel shape parameters and an importance index. The results show that (1) for the artificial classification functions, the EPNN is much more accurate than the PNN and slightly less accurate than the MLP; (2) for the actual classification applications, the EPNN is more accurate than both the MLP and the PNN; and (3) the importance index can indeed measure the importance of the input variables.
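To illustrate the idea of hyper-elliptical kernels, the following is a minimal sketch of an EPNN-style weighted kernel density and classifier. It is not the paper's implementation: the function names, the specific form of the density, and the normalization by the sum of data weights are assumptions made for illustration; only the roles of the three parameter kinds (variable weights, core-width reciprocal, data weights) follow the abstract.

```python
import numpy as np

def epnn_density(x, train_X, data_w, var_w, s):
    """Hypothetical EPNN class density: Gaussian kernels whose
    per-dimension spread is scaled by the variable weights var_w,
    giving hyper-elliptical rather than spherical contours.

    x       : (d,)   query point
    train_X : (n, d) training samples of one class
    data_w  : (n,)   data weights (data reliability)
    var_w   : (d,)   variable weights (variable importance)
    s       : scalar core-width reciprocal (effective data range)
    """
    diff = (train_X - x) * var_w           # per-dimension weighting
    d2 = np.sum((s * diff) ** 2, axis=1)   # weighted squared distances
    return np.sum(data_w * np.exp(-d2)) / np.sum(data_w)

def epnn_classify(x, classes, var_w, s):
    """Assign x to the class with the highest kernel density.
    `classes` maps a label to that class's (n, d) sample array;
    data weights are set to 1 here for simplicity."""
    dens = {k: epnn_density(x, X, np.ones(len(X)), var_w, s)
            for k, X in classes.items()}
    return max(dens, key=dens.get)
```

With `var_w = [1.0, 0.1]`, distances along the second dimension barely affect the density, so the kernel contours stretch into ellipses along that axis; a supervised learning rule for `var_w` (as derived in the paper) would shrink the weights of unimportant variables automatically.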
