
Unsupervised Clustering Based on Alpha-Divergence

Advisor: 黃聰明


Abstract


Recently, many deep learning methods have been proposed to learn representations or to cluster data without labels. Using the well-known ResNet[1] backbone as an effective feature extractor, we present an efficient deep clustering method that jointly optimizes the data representation and learns the clustering map. Despite the many successful applications of the Kullback–Leibler divergence and the Shannon entropy, we adopt the alpha-divergence and the Tsallis entropy as extensions of these common loss functions. For a more detailed interpretation, we further analyze the relation between clustering accuracy and distinct values of alpha. We achieve 53.96% test accuracy on the CIFAR-10[2] dataset and 27.24% accuracy on the CIFAR-100-20[2] dataset in unsupervised tasks.
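
The abstract names the alpha-divergence and the Tsallis entropy but does not fix a convention or show the loss. The sketch below is a minimal NumPy illustration of one common parameterization (the Amari alpha-divergence and the standard Tsallis entropy), not the thesis's implementation; the function names, the chosen convention, and the example distributions are assumptions added here. Both quantities recover the Kullback–Leibler divergence and the Shannon entropy as alpha tends to 1, which is the sense in which they extend the common loss functions.

import numpy as np

def alpha_divergence(p, q, alpha):
    # Amari alpha-divergence between discrete distributions p and q
    # (one common convention); reduces to KL(p || q) as alpha -> 1
    # and to KL(q || p) as alpha -> 0.
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 0.0):
        return float(np.sum(q * np.log(q / p)))
    return float((1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / (alpha * (1.0 - alpha)))

def tsallis_entropy(p, alpha):
    # Tsallis entropy; reduces to the Shannon entropy as alpha -> 1.
    p = np.asarray(p, float)
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p**alpha)) / (alpha - 1.0))

# Hypothetical soft cluster assignment and target distribution for one sample.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(alpha_divergence(p, q, alpha=0.5))   # divergence term of a clustering-style loss
print(tsallis_entropy(p, alpha=2.0))       # entropy regularizer on the assignment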

References


[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.
[2] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[3] A. Coates, A. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, pages 215–223, 2011.