Co-clustering Based on Dynamic Weight Adjustment

Translated Titles

Co-clustering with dynamic weighting





Key Words

Document Clustering ; Document Analysis ; Text Analysis ; Data Mining ; Information Retrieval ; Co-Clustering



Content Language

Chinese
Chinese Abstract

With the advance of technology and the growth of the Internet, the amount of available information has risen rapidly; this progress, however, forces users to spend ever more time browsing for the documents they need. Given the widespread use of search engines today, people expect to retrieve information with greater efficiency and effectiveness, and clustering techniques play an important role in meeting that expectation. If documents are properly clustered before a search, the search system can return more structured results to the user, which not only reduces the time spent searching but also helps users find the documents they want more quickly.

This study builds on the co-clustering method and improves it further, focusing on clustering performance and on raising or lowering feature weights. Using the Reuters, 20newsgroup, and classic3 data sets, we extract core keywords and assign them appropriate weights, thereby filtering out unnecessary noise and reinforcing the influence of the keywords. Using the coordinate information, keyword weights are adjusted based on each core keyword's distance from its cluster center. A logistic function then maps the adjusted weights into the interval between 0 and 1; after the keywords are assigned these adjusted weights, co-clustering is run again. Repeating these steps until convergence yields better clustering results.
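The weight-adjustment step the abstract describes — distance of a keyword from its cluster center, squashed into (0, 1) by a logistic function — can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual code; the function name, the `steepness` parameter, and the centering around the mean distance are assumptions.

```python
import numpy as np

def logistic_weights(term_coords, centers, labels, steepness=1.0):
    """Map each term's distance to its cluster center into a (0, 1) weight.

    term_coords : (n_terms, d) spectral-embedding coordinates of the terms
    centers     : (k, d) cluster centers
    labels      : (n_terms,) cluster index assigned to each term
    """
    # Euclidean distance from each term to its own cluster center.
    dists = np.linalg.norm(term_coords - centers[labels], axis=1)
    # Center distances around the mean so the logistic output sits near 0.5
    # for an average term; closer-than-average terms get weights above 0.5.
    # (The centering is an assumption, not the thesis's stated formula.)
    z = steepness * (dists.mean() - dists)
    return 1.0 / (1.0 + np.exp(-z))
```

In the iterative scheme, these weights would rescale the term columns of the document-term matrix before co-clustering is run again.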

English Abstract

This paper proposes a weighted co-clustering algorithm and applies it to the document clustering problem. Weighted co-clustering is an extension of co-clustering that exploits two of its properties to design a dynamic term-weighting algorithm. First, co-clustering places both documents and words in the same coordinate system using a spectral embedding technique. Second, co-clustering clusters documents and words simultaneously, so documents in the same cluster should be grouped together with their corresponding words. Based on these two properties, weighted co-clustering adjusts term weights iteratively. In addition, this paper proposes an outlier detection mechanism that removes outlier documents from the clustering process; once clustering is complete, these outlier documents are assigned to appropriate clusters. We conduct experiments on three data sets, and the results show that weighted co-clustering effectively improves clustering performance.
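The first property above — embedding documents and words in one coordinate system — follows Dhillon's bipartite spectral method: normalize the document-term matrix by row and column degrees, take its SVD, and use the singular vectors after the trivial first pair. A minimal numpy sketch, with illustrative names and not the thesis's implementation:

```python
import numpy as np

def spectral_embed(A, dim=2):
    """Embed the documents (rows) and terms (columns) of a document-term
    matrix A into a shared `dim`-dimensional coordinate system."""
    d1 = A.sum(axis=1)              # document degrees
    d2 = A.sum(axis=0)              # term degrees
    D1 = 1.0 / np.sqrt(d1)
    D2 = 1.0 / np.sqrt(d2)
    An = (A * D2) * D1[:, None]     # D1^{-1/2} A D2^{-1/2}
    U, s, Vt = np.linalg.svd(An, full_matrices=False)
    # The leading singular pair is trivial (constant); skip it and keep
    # the next `dim` singular vectors for both sides of the bipartite graph.
    Z_docs = D1[:, None] * U[:, 1:dim + 1]
    Z_terms = D2[:, None] * Vt[1:dim + 1, :].T
    return Z_docs, Z_terms
```

Running k-means on the stacked rows of `Z_docs` and `Z_terms` then clusters documents and their terms simultaneously, which is what makes the per-term distance-to-center weighting possible.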

Topic Category: Basic and Applied Sciences > Information Science
College of Computer Science > Institute of Computer Science and Engineering
Times Cited
  1. 詹靜惠 (2011). A study of medical compliance behaviors and related factors among children with allergic rhinitis. Master's thesis, In-service Master's Program, Department of Health Promotion and Health Education, National Taiwan Normal University. pp. 1-131.