
Learning from positive and unlabeled data: a survey. (English) Zbl 07224983

Summary: Learning from positive and unlabeled data, or PU learning, is the setting in which a learner has access only to positive examples and unlabeled data; the unlabeled data may contain both positive and negative examples. This setting has attracted increasing interest within the machine learning literature, as this type of data arises naturally in applications such as medical diagnosis and knowledge base completion. This article surveys the current state of the art in PU learning. It proposes seven key research questions that commonly arise in the field and provides a broad overview of how the field has tried to address them.
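To make the setting concrete, the following is a minimal sketch of one classical PU approach in the style of Elkan and Noto, under the "selected completely at random" (SCAR) assumption that each positive example is labeled with a constant probability c. The synthetic data, variable names, and the choice of logistic regression are illustrative assumptions, not the survey's own method.

```python
# Hedged sketch of an Elkan/Noto-style PU learner under SCAR:
# train a classifier on labeled-vs-unlabeled data, estimate the
# label frequency c = P(s=1 | y=1), then rescale its outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Illustrative synthetic data: positives ~ N(+2, 1), negatives ~ N(-2, 1).
n_pos, n_neg = 1000, 1000
X_pos = rng.normal(2.0, 1.0, size=(n_pos, 1))
X_neg = rng.normal(-2.0, 1.0, size=(n_neg, 1))

# SCAR labeling: each positive is observed (labeled) with probability c_true;
# negatives are never labeled, so they all end up in the unlabeled pool.
c_true = 0.3
labeled = rng.random(n_pos) < c_true

X = np.vstack([X_pos, X_neg])
s = np.concatenate([labeled.astype(int), np.zeros(n_neg, dtype=int)])

# Step 1: fit a "non-traditional" classifier g(x) ~ P(s=1 | x)
# that separates labeled examples from unlabeled ones.
g = LogisticRegression().fit(X, s)

# Step 2: estimate c as the mean of g over the labeled positives.
c_hat = g.predict_proba(X[s == 1])[:, 1].mean()

# Step 3: under SCAR, P(y=1 | x) = P(s=1 | x) / c.
def predict_pu(Xq):
    return np.clip(g.predict_proba(Xq)[:, 1] / c_hat, 0.0, 1.0)
```

On well-separated data like this, the estimated label frequency `c_hat` lands near the true value and `predict_pu` recovers sensible positive-class probabilities despite never seeing a labeled negative.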

MSC:

68T05 Learning and adaptive systems in artificial intelligence

Software:

ProDiGe; Aleph
