SimClus: an effective algorithm for clustering with a lower bound on similarity
Abstract
Clustering algorithms generally accept a parameter k from the user, which determines the number of clusters sought. However, in many application domains, like document categorization, social network clustering, and frequent pattern summarization, the proper value of k is difficult to guess. An alternative clustering formulation that does not require k is to impose a lower bound on the similarity between an object and its corresponding cluster representative. Such a formulation chooses exactly one representative for every cluster and minimizes the representative count. It has many additional benefits. For instance, it supports overlapping clusters in a natural way. Moreover, for every cluster, it selects a representative object, which can be effectively used in summarization or semi-supervised classification tasks. In this work, we propose an algorithm, SimClus, for clustering with a lower bound on similarity. It achieves an O(log n) approximation bound on the number of clusters, whereas for the best previous algorithm the bound can be as poor as O(n). Experiments on real and synthetic data sets show that our algorithm produces more than 40% fewer representative objects, yet offers the same or better clustering quality. We also propose a dynamic variant of the algorithm, which can be effectively used in an on-line setting.
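To make the formulation concrete, the sketch below illustrates the general idea of picking representatives greedily, set-cover style, which is the standard route to an O(log n) approximation on the representative count. This is a minimal illustration only, not the paper's SimClus algorithm; the similarity function `sim` and threshold `tau` are hypothetical placeholders.

```python
def greedy_representatives(objects, sim, tau):
    """Pick representatives so that every object has similarity >= tau
    to at least one chosen representative (a greedy set-cover sketch,
    not the paper's SimClus algorithm)."""
    # For each candidate, precompute the set of objects it can represent.
    coverage = {c: {o for o in objects if sim(c, o) >= tau} for c in objects}
    uncovered = set(objects)
    reps = []
    while uncovered:
        # Greedily pick the candidate that covers the most uncovered objects.
        best = max(objects, key=lambda c: len(coverage[c] & uncovered))
        reps.append(best)
        uncovered -= coverage[best]
    return reps

# Toy usage with an assumed similarity on numbers: two tight groups
# around 1 and 11 collapse to one representative each.
objs = [0, 1, 2, 10, 11, 12]
sim = lambda a, b: 1 - abs(a - b) / 10
reps = greedy_representatives(objs, sim, tau=0.85)
```

Because the greedy choice always covers the largest remaining uncovered set, the classical set-cover analysis bounds the number of chosen representatives by O(log n) times the optimum; overlapping clusters arise naturally whenever an object is within the similarity bound of more than one representative.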
Springer