zbMATH — the first resource for mathematics

A family of unsupervised sampling algorithms. (English) Zbl 1436.62041
Ros, Frédéric (ed.) et al., Sampling Techniques for Supervised or Unsupervised Tasks. Cham: Springer. Unsuperv. Semi-Superv. Learn., 45-81 (2020).
Summary: The chapter is organized as follows. An overview of unsupervised sampling methods is provided in Sect. 3.2. The concepts shared by the algorithms in the family, namely farthest-first traversal (fft), time optimization, and the relationship to coresets, are introduced in Sect. 3.3. The three algorithms, DIDES, DENDIS, and ProTraS, are then individually described in Sect. 3.4. Their common properties and differences are illustrated using synthetic data and analyzed in Sect. 3.5. Finally, the main conclusions are stated in Sect. 3.6.
For the entire collection see [Zbl 1433.62016].
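The algorithms in this family all build on farthest-first traversal (fft), the greedy heuristic of Hochbaum and Shmoys [24] and Rosenkrantz et al. [56]: starting from one point, repeatedly add the point whose distance to the current sample is largest. The following is a minimal sketch of plain fft sampling, not of DIDES, DENDIS, or ProTraS themselves; the function name and the Euclidean-distance and fixed-size assumptions are illustrative choices.

```python
import math

def farthest_first_traversal(points, k, start=0):
    """Greedy fft sampling: repeatedly add the point farthest
    from its nearest already-sampled representative."""
    sample = [start]
    # min_d[i] = distance from points[i] to its nearest sampled point
    min_d = [math.dist(p, points[start]) for p in points]
    while len(sample) < k:
        # next representative: the point with the largest such distance
        nxt = max(range(len(points)), key=lambda i: min_d[i])
        sample.append(nxt)
        # update nearest-representative distances in O(n) per step
        for i, p in enumerate(points):
            d = math.dist(p, points[nxt])
            if d < min_d[i]:
                min_d[i] = d
    return sample
```

Each added point is a 2-approximate k-center in the sense of [24]; the chapter's algorithms replace the fixed sample size k with density- or distortion-driven stopping rules.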
62D05 Sampling theory, sample surveys
Full Text: DOI
[1] Agarwal, P.K., Har-Peled, S., Varadarajan, K.R.: Approximating extent measures of points. J. ACM 51(4), 606-635 (2004). http://doi.acm.org/10.1145/1008731.1008736 · Zbl 1204.68240
[2] Al-Kateb, M., Lee, B.: Adaptive stratified reservoir sampling over heterogeneous data streams. Inf. Syst. 39(1), 199-216 (2014)
[3] Al-Kateb, M., Lee, B.S., Wang, X.S.: Adaptive-size reservoir sampling over data streams. In: 19th International Conference on Scientific and Statistical Database Management, SSDBM’07, pp. 22-22. IEEE, Piscataway (2007)
[4] Arthur, D., Vassilvitskii, S.: k-means++: the advantages of careful seeding. In: Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1027-1035. Society for Industrial and Applied Mathematics, Philadelphia (2007) · Zbl 1302.68273
[5] Azzalini, A., Torelli, N.: Clustering via nonparametric density estimation. Stat. Comput. 17(1), 71-80 (2007)
[6] Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Functions Algorithms. Plenum Press, New York (1981) · Zbl 0503.68069
[7] Celebi, M.E., Kingravi, H.A., Vela, P.A.: A comparative study of efficient initialization methods for the k-means clustering algorithm. Expert Syst. Appl. 40(1), 200-210 (2013)
[8] Chaudhuri, S., Das, G., Narasayya, V.: Optimized stratified sampling for approximate query processing. ACM Trans. Database Syst. 32(2), 9 (2007)
[9] Chehreghani, M., Abolhassani, H., Chehreghani, M.: Improving density-based methods for hierarchical clustering of web pages. Data Knowl. Eng. 67(1), 30-50 (2008)
[10] Chernoff, H.: A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Stat. 23(4), 493-507 (1952) · Zbl 0048.11804
[11] Chernoff, H.: A note on an inequality involving the normal distribution. Ann. Probab. 9, 533-535 (1981) · Zbl 0457.60014
[12] Chiang, M.C., Tsai, C.W., Yang, C.S.: A time-efficient pattern reduction algorithm for k-means clustering. Inf. Sci. 181(4), 716-731 (2011)
[13] Chiu, S.L.: Fuzzy model identification based on cluster estimation. J. Intell. Fuzzy Syst. 2, 267-278 (1994)
[14] Dolnicar, S., Leisch, F.: Segmenting markets by bagged clustering. Australas. Mark. J. 12(1), 51-65 (2004)
[15] Efraimidis, P.S., Spirakis, P.G.: Weighted random sampling with a reservoir. Inf. Process. Lett. 97(5), 181-185 (2006) · Zbl 1184.68620
[16] Epanechnikov, V.A.: Non-parametric estimation of a multivariate probability density. Theory Probab. Appl. 14(1), 153-158 (1969)
[17] Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, pp. 226-231. AAAI Press, Menlo Park (1996)
[18] Feldman, D., Faulkner, M., Krause, A.: Scalable training of mixture models via coresets. In: Advances in Neural Information Processing Systems, pp. 2142-2150 (2011)
[19] Gutmann, B., Kersting, K.: Stratified gradient boosting for fast training of conditional random fields. In: Proceedings of the 6th International Workshop on Multi-Relational Data Mining, pp. 56-68 (2007)
[20] Har-Peled, S., Mazumdar, S.: On coresets for k-means and k-median clustering. In: Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, STOC’04, pp. 291-300. ACM, New York (2004). https://doi.org/10.1145/1007352.1007400 · Zbl 1192.68904
[21] Hartigan, J.A.: Clustering Algorithms. Wiley, London (1975) · Zbl 0372.62040
[22] Hartigan, J.A., Wong, M.: A k-means clustering algorithm. Appl. Stat. 28, 100-108 (1979) · Zbl 0447.62062
[23] Hatamlou, A., Abdullah, S., Nezamabadi-pour, H.: A combined approach for clustering based on k-means and gravitational search algorithms. Swarm Evol. Comput. 6, 47-52 (2012)
[24] Hochbaum, D.S., Shmoys, D.B.: A best possible heuristic for the k-center problem. Math. Oper. Res. 10(2), 180-184 (1985) · Zbl 0565.90015
[25] Hodge, V.J., Austin, J.: A survey of outlier detection methodologies. Artif. Intell. Rev. 22(2), 85-126 (2004) · Zbl 1101.68023
[26] Hoeffding, W.: Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 58(301), 13-30 (1963) · Zbl 0127.10602
[27] Ilango, M.R., Mohan, V.: A survey of grid based clustering algorithms. Int. J. Eng. Sci. Technol. 2(8), 3441-3446 (2010)
[28] Jain, A.K.: Data clustering: 50 years beyond k-means. Pattern Recognit. Lett. 31(8), 651-666 (2010)
[29] Jiang, M.F., Tseng, S.S., Su, C.M.: Two-phase clustering process for outliers detection. Pattern Recognit. Lett. 22(6), 691-700 (2001) · Zbl 1010.68908
[30] Kärkkäinen, I., Fränti, P.: Dynamic local search algorithm for the clustering problem. Tech. Rep. A-2002-6, Department of Computer Science, University of Joensuu, Joensuu (2002) · Zbl 1118.68143
[31] Karypis, G., Han, E.H., Kumar, V.: Chameleon: hierarchical clustering using dynamic modeling. Computer 32(8), 68-75 (1999)
[32] Kaufman, L., Rousseeuw, P.: Clustering by means of medoids. In: Statistical Data Analysis Based on the L1-Norm and Related Methods. North-Holland, Amsterdam (1987)
[33] Kerdprasop, K., Kerdprasop, N., Sattayatham, P.: Density-biased clustering based on reservoir sampling. In: Proceedings of the Sixteenth International Workshop on Database and Expert Systems Applications, pp. 1122-1126. IEEE, Piscataway (2005)
[34] Khan, S.S., Ahmad, A.: Cluster center initialization algorithm for k-modes clustering. Expert Syst. Appl. 40(18), 7444-7456 (2013)
[35] Kollios, G., Gunopulos, D., Koudas, N., Berchtold, S.: Efficient biased sampling for approximate clustering and outlier detection in large data sets. IEEE Trans. Knowl. Data Eng. 15(5), 1170-1187 (2003)
[36] Krishnapuram, R., Keller, J.M.: A possibilistic approach to clustering. IEEE Trans. Fuzzy Syst. 1(2), 98-110 (1993)
[37] Leisch, F., Dolnicar, S.: Winter tourist segments in Austria: identifying stable vacation styles using bagged clustering techniques. J. Travel Res. 41(3), 281-292 (2003)
[38] Linde, Y., Buzo, A., Gray, R.: An algorithm for vector quantizer design. IEEE Trans. Commun. 28(1), 84-95 (1980). https://doi.org/10.1109/TCOM.1980.1094577
[39] Ling, R.F.: Cluster analysis algorithms for data reduction and classification of objects. Technometrics 23(4), 417-418 (1981)
[40] Lloyd, S.P.: Least squares quantization in PCM. IEEE Trans. Inf. Theory 28(2), 129-137 (1982) · Zbl 0504.94015
[41] Lv, Y., Ma, T., Tang, M., Cao, J., Tian, Y., Al-Dhelaan, A., Al-Rodhaan, M.: An efficient and scalable density-based clustering algorithm for datasets with complex structures. Neurocomputing 171, 9-22 (2015)
[42] Ma, X., Pan, Z., Li, Y., Fang, J.: High-quality initial codebook design method of vector quantisation using grouping strategy. IET Image Process. 9, 986-992 (2015)
[43] Machová, K., Puszta, M., Barčák, F., Bednár, P.: A comparison of the bagging and the boosting methods using the decision trees classifiers. Comput. Sci. Inf. Syst. 3(2), 57-72 (2006)
[44] MacQueen, J.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, pp. 281-297 (1967) · Zbl 0214.46201
[45] Menardi, G., Azzalini, A.: An advancement in clustering via nonparametric density estimation. Stat. Comput. 24(5), 753-767 (2014) · Zbl 1322.62175
[46] Mitra, P., Murthy, C., Pal, S.: Density-based multiscale data condensation. IEEE Trans. Pattern Anal. Mach. Intell. 24(6), 734-747 (2002)
[47] Naldi, M., Campello, R.: Comparison of distributed evolutionary k-means clustering algorithms. Neurocomputing 163, 78-93 (2015)
[48] Nanopoulos, A., Manolopoulos, Y., Theodoridis, Y.: An efficient and effective algorithm for density biased sampling. In: Proceedings of the Eleventh International Conference on Information and Knowledge Management, pp. 398-404 (2002)
[49] Nanopoulos, A., Theodoridis, Y., Manolopoulos, Y.: Indexed-based density biased sampling for clustering applications. Data Knowl. Eng. 57(1), 37-63 (2006)
[50] Palmer, C.R., Faloutsos, C.: Density biased sampling: an improved method for data mining and clustering. In: ACM SIGMOD Intl. Conference on Management of Data, Dallas, pp. 82-92 (2000)
[51] Rahman, M.A., Islam, M.Z.: A hybrid clustering technique combining a novel genetic algorithm with k-means. Knowl.-Based Syst. 71, 345-365 (2014)
[52] Ros, F., Guillaume, S.: DENDIS: a new density-based sampling for clustering algorithm. Expert Syst. Appl. 56, 349-359 (2016). https://doi.org/10.1016/j.eswa.2016.03.008
[53] Ros, F., Guillaume, S.: DIDES: a fast and effective sampling for clustering algorithm. Knowl. Inf. Syst. 50, 543-568 (2016). https://doi.org/10.1007/s10115-016-0946-8
[54] Ros, F., Guillaume, S.: ProTraS: a probabilistic traversing sampling algorithm. Expert Syst. Appl. 105, 65-76 (2018). https://doi.org/10.1016/j.eswa.2018.03.052
[55] Ros, F., Pintore, M., Deman, A., Chrétien, J.: Automatical initialization of RBF neural networks. Chemom. Intell. Lab. Syst. 87(1), 26-32 (2007)
[56] Rosenkrantz, D.J., Stearns, R.E., Lewis, P.M. II.: An analysis of several heuristics for the traveling salesman problem. SIAM J. Comput. 6(3), 563-581 (1977) · Zbl 0364.90104
[57] Rousseeuw, P.J.: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 20, 53-65 (1987) · Zbl 0636.62059
[58] Sarma, T., Viswanath, P., Reddy, B.: Speeding-up the kernel k-means clustering method: a prototype based hybrid approach. Pattern Recognit. Lett. 34(5), 564-573 (2013)
[59] Sarma, T.H., Viswanath, P., Reddy, B.E.: Speeding-up the kernel k-means clustering method: a prototype based hybrid approach. Pattern Recognit. Lett. 34(5), 564-573 (2013)
[60] Tan, S.C., Ting, K.M., Teng, S.W.: A general stochastic clustering method for automatic cluster discovery. Pattern Recognit. 44(10), 2786-2799 (2011)
[61] Tax, D., Duin, R.: Support vector data description. Mach. Learn. 54(1), 45-66 (2004) · Zbl 1078.68728
[62] Viswanath, P., Sarma, T., Reddy, B.: A hybrid approach to speed-up the k-means clustering method. Int. J. Mach. Learn. Cybern. 4(2), 107-117 (2013)
[63] Vitter, J.S.: Random sampling with a reservoir. ACM Trans. Math. Softw. 11(1), 37-57 (1985) · Zbl 0562.68028
[64] Wang, X., Wang, X., Wilkes, D.M.: A divide-and-conquer approach for minimum spanning tree-based clustering. IEEE Trans. Knowl. Data Eng. 21(7), 945-958 (2009)
[65] Xiao, Y., Liu, B., Hao, Z., Cao, L.: A k-farthest-neighbor-based approach for support vector data description. Appl. Intell. 41(1), 196-211 (2014)
[66] Yager, R.R., Filev, D.P.: Generation of fuzzy rules by mountain clustering. J. Intell. Fuzzy Syst. 2, 209-219 (1994)
[67] Yang, M.S., Wu, K.L.: A modified mountain clustering algorithm. Pattern Anal. Appl. 8(1-2), 125-138 (2005)
[68] Zahra, S., Ghazanfar, M.A., Khalid, A., Azam, M.A., Naeem, U., Prugel-Bennett, A.: Novel centroid selection approaches for kmeans-clustering based recommender systems. Inf. Sci. 320, 156-189 (2015)
[69] Zhong, C. · Zbl 1360.68730
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.