In Gibbs-sampling-based Bayesian classification, for each 'model' -- i.e., for each combination of the number of clusters, the degree of the corresponding polynomials, etc. -- we find the posterior distribution corresponding to that model, and we then select the model with the largest marginal likelihood. For a small number of parameters, each with a few possible values, the model can be selected by exhaustive search. For a larger number of parameters, however, exhaustive search requires too much computation time, so faster model selection is needed. In this paper, we describe two possible methods for faster model selection: component-wise optimization and gradient ascent. We show that gradient ascent leads to faster model selection.
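The contrast between exhaustive search and component-wise optimization can be sketched on a toy problem. Everything below is a hypothetical illustration, not the paper's implementation: the `marginal_likelihood` function is a stand-in for the marginal likelihood a Gibbs sampler would estimate for each model, and the parameter grids are arbitrary.

```python
import itertools

def marginal_likelihood(n_clusters, degree):
    # Hypothetical surrogate for the (log) marginal likelihood of a model,
    # peaking at 3 clusters and polynomial degree 2. In practice this value
    # would be estimated from the Gibbs sampler's output for that model.
    return -((n_clusters - 3) ** 2 + (degree - 2) ** 2)

CLUSTERS = range(1, 11)   # candidate numbers of clusters
DEGREES = range(1, 6)     # candidate polynomial degrees

def exhaustive_search():
    # Evaluate every model; cost grows as the product of the grid sizes.
    return max(itertools.product(CLUSTERS, DEGREES),
               key=lambda m: marginal_likelihood(*m))

def componentwise_ascent(start=(1, 1)):
    # Optimize one model parameter at a time, holding the other fixed;
    # repeat sweeps until no single-parameter change improves the objective.
    # Cost per sweep grows only as the *sum* of the grid sizes.
    n, d = start
    improved = True
    while improved:
        improved = False
        best_n = max(CLUSTERS, key=lambda c: marginal_likelihood(c, d))
        if best_n != n:
            n, improved = best_n, True
        best_d = max(DEGREES, key=lambda g: marginal_likelihood(n, g))
        if best_d != d:
            d, improved = best_d, True
    return n, d

print(exhaustive_search())     # (3, 2)
print(componentwise_ascent())  # (3, 2)
```

On this unimodal surrogate both methods find the same model, but component-wise ascent evaluates far fewer candidates; gradient ascent pushes in the same direction by following the local slope of the objective rather than scanning each coordinate's full grid.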