## Semi-supervised clustering with metric learning: an adaptive kernel method.
*(English)*
Zbl 1192.68623

Summary: Most existing representative work on semi-supervised clustering does not adequately handle violations of pairwise constraints. Moreover, traditional kernel methods for semi-supervised clustering require manual tuning of the kernel parameters, since the available supervision is too limited to set them automatically, and they lack a measure that yields more effective clustering. In this paper, we propose an adaptive Semi-supervised Clustering Kernel Method based on Metric learning (SCKMM) to mitigate these problems. Specifically, we first construct an objective function from the pairwise constraints to estimate the parameter of the Gaussian kernel automatically. We then use a pairwise-constraint-based K-means approach to resolve constraint violations and to cluster the data. Furthermore, we introduce metric learning into nonlinear semi-supervised clustering to improve the separability of the data. Finally, we perform clustering and metric learning simultaneously. Experimental results on a number of real-world data sets validate the effectiveness of the proposed method.
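The two ingredients the summary describes — estimating the Gaussian kernel width from pairwise constraints, and a K-means assignment step that respects must-link/cannot-link constraints — can be illustrated with a minimal sketch. This is not the authors' SCKMM algorithm: the grid-search objective (mean must-link similarity minus mean cannot-link similarity), the function names, and the COP-KMeans-style assignment rule are all illustrative assumptions standing in for the paper's actual objective function and update rules.

```python
import numpy as np

def rbf(x, y, sigma):
    # Gaussian (RBF) kernel between two points.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def estimate_sigma(X, must_link, cannot_link, grid):
    # Illustrative stand-in for the paper's constraint-based objective:
    # pick the kernel width under which must-link pairs look most
    # similar relative to cannot-link pairs, via a simple grid search.
    def score(sigma):
        ml = np.mean([rbf(X[i], X[j], sigma) for i, j in must_link])
        cl = np.mean([rbf(X[i], X[j], sigma) for i, j in cannot_link])
        return ml - cl
    return max(grid, key=score)

def violates(i, c, labels, must_link, cannot_link):
    # True if placing point i in cluster c breaks a constraint against
    # a point already assigned in this pass (label -1 = unassigned).
    for a, b in must_link:
        j = b if a == i else (a if b == i else None)
        if j is not None and labels[j] != -1 and labels[j] != c:
            return True
    for a, b in cannot_link:
        j = b if a == i else (a if b == i else None)
        if j is not None and labels[j] == c:
            return True
    return False

def constrained_kmeans(X, k, must_link, cannot_link, n_iter=50, seed=0):
    # COP-KMeans-style clustering: each point takes the nearest center
    # whose cluster does not violate its pairwise constraints.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = -np.ones(len(X), dtype=int)
    for _ in range(n_iter):
        new = -np.ones(len(X), dtype=int)
        for i in range(len(X)):
            # Try centers from nearest to farthest until one is feasible.
            order = np.argsort(np.linalg.norm(centers - X[i], axis=1))
            for c in order:
                if not violates(i, c, new, must_link, cannot_link):
                    new[i] = c
                    break
        labels = new
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers
```

On two well-separated blobs with consistent constraints, the estimated width separates the two pair populations, and the constrained assignment keeps must-link pairs together and cannot-link pairs apart. The sketch omits the metric-learning step that SCKMM interleaves with clustering.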

### MSC:

68T10 Pattern recognition, speech recognition


*X. Yin* et al., Pattern Recognition 43, No. 4, 1320–1333 (2010; Zbl 1192.68623)

