
Domain generalization by marginal transfer learning. (English) Zbl 07370519
Summary: In the problem of domain generalization (DG), there are labeled training data sets from several related prediction problems, and the goal is to make accurate predictions on future unlabeled data sets that are not known to the learner. This problem arises in several applications where data distributions fluctuate because of environmental, technical, or other sources of variation. We introduce a formal framework for DG and argue that it can be viewed as a kind of supervised learning problem by augmenting the original feature space with the marginal distribution of feature vectors. While our framework has several connections to the conventional analysis of supervised learning algorithms, certain unique aspects of DG require new methods of analysis.
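To make the augmented feature space concrete, the following display is a minimal sketch consistent with the summary (the product-kernel form and the Gaussian choice for \(k_P\) are assumptions, not quotations from the paper): a decision function \(f(P_X, x)\) is learned on pairs consisting of a marginal distribution and a feature vector, using a kernel such as
\[
\bar k\big((P_X,x),(P'_X,x')\big) = k_P(P_X,P'_X)\,k_X(x,x'), \qquad k_P(P_X,P'_X) = \exp\Big(-\tfrac{1}{2\sigma^2}\big\|\mu_{P_X}-\mu_{P'_X}\big\|_{\mathcal H}^2\Big),
\]
where \(\mu_{P_X}=\mathbb E_{X\sim P_X}[\Phi(X)]\) is the kernel mean embedding of \(P_X\) into the RKHS \(\mathcal H\) of a base kernel; in practice \(\mu_{P_X}\) is replaced by its empirical version computed from the unlabeled sample at hand.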
This work lays the learning-theoretic foundations of domain generalization, building on our earlier conference paper in which the problem of DG was introduced. We present two formal models of data generation, corresponding notions of risk, and distribution-free generalization error analysis. Focusing on kernel methods, we also provide more quantitative results and a universally consistent algorithm. An efficient implementation of this algorithm is provided and experimentally compared to a pooling strategy on one synthetic and three real-world data sets.
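To illustrate the comparison with pooling, here is a minimal sketch (not the authors' implementation; the toy data, bandwidth values, and the use of scikit-learn's SVC with a precomputed kernel are assumptions) of the two strategies: a marginal transfer learner whose Gram matrix is the product of a task-level similarity, built from empirical kernel mean embeddings via the MMD, and a pointwise similarity, against a pooling baseline that trains a single SVM on the union of all training points:

import numpy as np
from sklearn.svm import SVC

def rbf(A, B, gamma):
    # Pairwise Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X1, X2, gamma):
    # Squared MMD: empirical distance between the kernel mean embeddings
    # of the two samples' marginal distributions.
    return (rbf(X1, X1, gamma).mean() + rbf(X2, X2, gamma).mean()
            - 2.0 * rbf(X1, X2, gamma).mean())

def marginal_gram(tasks_a, tasks_b, g_emb, g_p, g_x):
    # Gram matrix of the product kernel on augmented points (P_X, x):
    # task-level similarity k_P times pointwise similarity k_X.
    Kx = rbf(np.vstack(tasks_a), np.vstack(tasks_b), g_x)
    D = np.array([[mmd2(A, B, g_emb) for B in tasks_b] for A in tasks_a])
    KP = np.exp(-g_p * D)
    ra = np.repeat(np.arange(len(tasks_a)), [len(A) for A in tasks_a])
    rb = np.repeat(np.arange(len(tasks_b)), [len(B) for B in tasks_b])
    return Kx * KP[np.ix_(ra, rb)]

rng = np.random.default_rng(0)

def make_task(shift, n=60):
    # Toy binary task: class means at +/-1 plus a task-specific shift,
    # so the marginal of X carries information about the task.
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + np.where(y[:, None] == 1, 1.0, -1.0) + shift
    return X, y

train = [make_task(s) for s in rng.normal(0.0, 2.0, (5, 2))]
test_X, test_y = make_task(rng.normal(0.0, 2.0, 2))
Xtr = [X for X, _ in train]
ytr = np.concatenate([y for _, y in train])

# Marginal transfer learning: SVM on the precomputed augmented kernel.
# At test time the marginal is estimated from the unlabeled test sample.
mtl = SVC(kernel="precomputed").fit(marginal_gram(Xtr, Xtr, 0.5, 0.5, 0.5), ytr)
K_test = marginal_gram([test_X], Xtr, 0.5, 0.5, 0.5)
print("marginal transfer acc:", (mtl.predict(K_test) == test_y).mean())

# Pooling baseline: ignore task identity and train one SVM on all points.
pool = SVC(kernel="rbf", gamma=0.5).fit(np.vstack(Xtr), ytr)
print("pooling acc:", (pool.predict(test_X) == test_y).mean())

The point of the sketch is only the mechanics: the augmented kernel lets the learned decision rule depend on the test set's own marginal distribution, which the pooling baseline cannot do.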
MSC:
68T05 Learning and adaptive systems in artificial intelligence
References:
[1] Nima Aghaeepour, Greg Finak, The FlowCAP Consortium, The DREAM Consortium, Holger Hoos, Tim R. Mosmann, Ryan Brinkman, Raphael Gottardo, and Richard H. Scheuermann. Critical assessment of automated flow cytometry data analysis techniques. Nature Methods, 10(3):228-238, 2013.
[2] Kei Akuzawa, Yusuke Iwasawa, and Yutaka Matsuo. Adversarial invariant feature learning with accuracy constraint for domain generalization. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2019.
[3] Kamyar Azizzadenesheli, Anqi Liu, Fanny Yang, and Animashree Anandkumar. Regularized learning for domain adaptation under label shifts. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJl0r3R9KX.
[4] Gökhan Bakır, Thomas Hofmann, Bernhard Schölkopf, Alexander J. Smola, and Ben Taskar. Predicting Structured Data. MIT Press, 2007.
[5] Yogesh Balaji, Swami Sankaranarayanan, and Rama Chellappa. MetaReg: Towards domain generalization using meta-regularization. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, 2018.
[6] Peter Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002. · Zbl 1084.68549
[7] Peter Bartlett, Michael Jordan, and Jon McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006. · Zbl 1118.62330
[8] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149-198, 2000. · Zbl 0940.68106
[9] Shai Ben-David and Ruth Urner. On the hardness of domain adaptation and the utility of unlabeled target samples. In Nader H. Bshouty, Gilles Stoltz, Nicolas Vayatis, and Thomas Zeugmann, editors, Algorithmic Learning Theory, pages 139-153, 2012. · Zbl 1367.68220
[10] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 137-144, 2007.
[11] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79:151-175, 2010.
[12] Steffen Bickel, Michael Brückner, and Tobias Scheffer. Discriminative learning under covariate shift. Journal of Machine Learning Research, 10:2137-2155, 2009. · Zbl 1235.62066
[13] Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. In John Shawe-Taylor, Richard S. Zemel, Peter L. Bartlett, Fernando Pereira, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2178-2186, 2011.
[14] Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi, and Clayton Scott. Classification with asymmetric label noise: Consistency and maximal denoising. Electronic Journal of Statistics, 10:2780-2824, 2016. · Zbl 1347.62106
[15] John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 129-136, 2008.
[16] Timothy I. Cannings, Yingying Fan, and Richard J. Samworth. Classification with imperfect training labels. Technical Report arXiv:1805.11505, 2018. · Zbl 1441.62165
[17] Fabio Maria Carlucci, Antonio D'Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Domain generalization by solving jigsaw puzzles. In 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2224-2233, 2019.
[18] Rich Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[19] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(3):27, 2011.
[20] Andreas Christmann and Ingo Steinwart. Universal kernels on non-standard input spaces. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 406-414, 2010.
[21] Corinna Cortes, Mehryar Mohri, Michael Riley, and Afshin Rostamizadeh. Sample selection bias correction theory. In Algorithmic Learning Theory, pages 38-53, 2008. · Zbl 1156.68524
[22] Corinna Cortes, Mehryar Mohri, and Andrés Muñoz Medina. Adaptation algorithm and theory based on generalized discrepancy. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15, pages 169-178, 2015.
[23] Daryl J. Daley and David Vere-Jones. An Introduction to the Theory of Point Processes, volume I: Elementary Theory and Methods. Springer, 2003. · Zbl 1159.60003
[24] Daryl J. Daley and David Vere-Jones. An Introduction to the Theory of Point Processes, volume II: General Theory and Structure. Springer, 2008. · Zbl 1159.60003
[25] Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Learning to learn around a common mean. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 10169-10179, 2018a.
[26] Giulia Denevi, Carlo Ciliberto, Dimitris Stamos, and Massimiliano Pontil. Incremental learning-to-learn with statistical guarantees. In Proc. Uncertainty in Artificial Intelligence, 2018b.
[27] Zhengming Ding and Yun Fu. Deep domain generalization with structured low-rank constraint. IEEE Transactions on Image Processing, 27:304-313, 2018. · Zbl 1409.94120
[28] Qi Dou, Daniel Coelho de Castro, Konstantinos Kamnitsas, and Ben Glocker. Domain generalization via model-agnostic learning of semantic features. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 6450-6461, 2019.
[29] Petros Drineas and Michael W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. The Journal of Machine Learning Research, 6:2153-2175, 2005. · Zbl 1222.68186
[30] Marthinus Christoffel Du Plessis and Masashi Sugiyama. Semi-supervised learning of class balance under class-prior change by distribution matching. In J. Langford and J. Pineau, editors, Proc. 29th Int. Conf. on Machine Learning, pages 823-830, 2012. · Zbl 1298.68268
[31] Theodoros Evgeniou, Charles A. Micchelli, and Massimiliano Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637, 2005. · Zbl 1222.68197
[32] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871-1874, 2008. · Zbl 1225.68175
[33] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh, editors, International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126-1135, 2017.
[34] Chuang Gan, Tianbao Yang, and Boqing Gong. Learning attributes equals multi-source domain generalization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[35] Pascal Germain, Amaury Habrard, François Laviolette, and Emilie Morvant. A new PAC-Bayesian perspective on domain adaptation. In International Conference on Machine Learning, volume 48 of JMLR Workshop and Conference Proceedings, pages 859-868, 2016.
[36] Muhammad Ghifary, W. Bastiaan Kleijn, Mengjie Zhang, and David Balduzzi. Domain generalization for object recognition with multi-task autoencoders. In IEEE International Conference on Computer Vision, pages 2551-2559, 2015.
[37] Muhammad Ghifary, David Balduzzi, W. Bastiaan Kleijn, and Mengjie Zhang. Scatter component analysis: A unified framework for domain adaptation and domain generalization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(7):1411-1430, 2017.
[38] Mingming Gong, Kun Zhang, Tongliang Liu, Dacheng Tao, Clark Glymour, and Bernhard Schölkopf. Domain adaptation with conditional transferable components. In International Conference on Machine Learning, pages 2839-2848, 2016.
[39] Arthur Gretton, Karsten Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel approach to comparing distributions. In R. Holte and A. Howe, editors, 22nd AAAI Conference on Artificial Intelligence, pages 1637-1641, 2007a.
[40] Arthur Gretton, Karsten Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel method for the two-sample-problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513-520, 2007b.
[41] Thomas Grubinger, Adriana Birlutiu, Holger Schöner, Thomas Natschläger, and Tom Heskes. Domain generalization based on transfer component analysis. In I. Rojas, G. Joya, and A. Catala, editors, Advances in Computational Intelligence, International Work-Conference on Artificial Neural Networks, volume 9094 of Lecture Notes in Computer Science, pages 325-334. Springer International Publishing, 2015.
[42] Peter Hall. On the non-parametric estimation of mixture proportions. Journal of the Royal Statistical Society, 43(2):147-156, 1981. · Zbl 0472.62052
[43] Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In International Conference on Machine Learning, pages 408-415. ACM, 2008.
[44] Shoubo Hu, Kun Zhang, Zhitang Chen, and Laiwan Chan. Domain generalization via multidomain discriminant analysis. In Amir Globerson and Ricardo Silva, editors, Uncertainty in Artificial Intelligence, 2019.
[45] Jiayuan Huang, Alexander J. Smola, Arthur Gretton, Karsten M. Borgwardt, and Bernhard Schölkopf. Correcting sample selection bias by unlabeled data. In Advances in Neural Information Processing Systems, pages 601-608, 2007.
[46] Wittawat Jitkrittum, Arthur Gretton, Nicolas Heess, S. M. Eslami, Balaji Lakshminarayanan, Dino Sejdinovic, and Zoltán Szabó. Kernel-based just-in-time learning for passing expectation propagation messages. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, pages 405-414. AUAI Press, 2015.
[47] Thorsten Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, chapter 11, pages 169-184. MIT Press, Cambridge, MA, 1999.
[48] Olav Kallenberg. Foundations of Modern Probability. Springer, 2002.
[49] Takafumi Kanamori, Shohei Hido, and Masashi Sugiyama. A least-squares approach to direct importance estimation. Journal of Machine Learning Research, 10:1391-1445, 2009. · Zbl 1235.62039
[50] Aditya Khosla, Tinghui Zhou, Tomasz Malisiewicz, Alexei A. Efros, and Antonio Torralba. Undoing the damage of dataset bias. In 12th European Conference on Computer Vision - Volume Part I, pages 158-171, 2012.
[51] Ron Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In International Joint Conference on Artificial Intelligence, volume 14, pages 1137-1145, 1995.
[52] Vladimir Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902-1914, 2001. · Zbl 1008.62614
[53] Patrice Latinne, Marco Saerens, and Christine Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities may significantly improve classification accuracy: Evidence from a multi-class problem in remote sensing. In C. Sammut and A. H. Hoffmann, editors, International Conference on Machine Learning, pages 298-305, 2001. · Zbl 1026.62065
[54] Quoc Le, Tamás Sarlós, and Alex Smola. Fastfood: approximating kernel expansions in loglinear time. In International Conference on Machine Learning, volume 28, pages III-244, 2013.
[55] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 5542-5550, 2017.
[56] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M. Hospedales. Learning to generalize: Meta-learning for domain generalization. In AAAI Conference on Artificial Intelligence, 2018a.
[57] Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C. Kot. Domain generalization with adversarial feature learning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 5400-5409, 2018b.
[58] Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, and Dacheng Tao. Domain generalization via conditional invariant representations. In AAAI Conference on Artificial Intelligence, 2018c.
[59] Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 624-639, 2018d.
[60] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In Conference on Learning Theory, 2009a. · Zbl 1242.68238
[61] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation with multiple sources. In Advances in Neural Information Processing Systems, pages 1041-1048, 2009b.
[62] Andreas Maurer. Transfer bounds for linear feature learning. Machine Learning, 75(3):327-350, 2009.
[63] Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. Sparse coding for multitask and transfer learning. In Sanjoy Dasgupta and David McAllester, editors, International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 343-351, 2013. · Zbl 1360.68696
[64] Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. The benefit of multitask representation learning. Journal of Machine Learning Research, 17(1):2853-2884, 2016. · Zbl 1360.68696
[65] Aditya Krishna Menon, Brendan van Rooyen, and Nagarajan Natarajan. Learning from binary labels with instance-dependent noise. Machine Learning, 107:1561-1595, 2018. · Zbl 06990194
[66] Saeid Motiian, Marco Piccirilli, Donald A. Adjeroh, and Gianfranco Doretto. Unified deep supervised domain adaptation and generalization. In IEEE International Conference on Computer Vision, pages 5715-5725, 2017.
[67] Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. Domain generalization via invariant feature representation. In International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages I-10-I-18, 2013.
[68] Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep Ravikumar, and Ambuj Tewari. Cost-sensitive learning with noisy labels. Journal of Machine Learning Research, 18(155):1-33, 2018. URL http://jmlr.org/papers/v18/15-226.html. · Zbl 1467.68151
[69] Kalyanapuram Rangachari Parthasarathy. Probability Measures on Metric Spaces. Academic Press, 1967. · Zbl 0469.58006
[70] Anastasia Pentina and Christoph Lampert. A PAC-Bayesian bound for lifelong learning. In Eric P. Xing and Tony Jebara, editors, International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 991-999, 2014.
[71] Iosif F. Pinelis and Aleksandr Ivanovich Sakhanenko. Remarks on inequalities for probabilities of large deviations. Theory Probab. Appl., 30(1):143-148, 1985.
[72] Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil Lawrence. Dataset Shift in Machine Learning. The MIT Press, 2009.
[73] Ali Rahimi and Ben Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177-1184, 2007.
[74] Alessandro Rudi and Lorenzo Rosasco. Generalization properties of learning with random features. In Advances in Neural Information Processing Systems, pages 3215-3225, 2017.
[75] Tyler Sanderson and Clayton Scott. Class proportion estimation with application to multiclass anomaly rejection. In Conference on Artificial Intelligence and Statistics, 2014.
[76] Clayton Scott. A generalized Neyman-Pearson criterion for optimal domain adaptation. In Aurélien Garivier and Satyen Kale, editors, Algorithmic Learning Theory, volume 98 of Proceedings of Machine Learning Research, pages 738-761, 2019.
[77] Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, and Sunita Sarawagi. Generalizing across domains via cross-gradient training. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=r1Dx7fbCW.
[78] Srinagesh Sharma and James W. Cutler. Robust orbit determination and classification: A learning theoretic approach. Interplanetary Network Progress Report, 203:1, 2015.
[79] Bharath Sriperumbudur and Zoltán Szabó. Optimal rates for random Fourier features. In Advances in Neural Information Processing Systems, pages 1144-1152, 2015.
[80] Bharath Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517-1561, 2010. · Zbl 1242.60005
[81] Ingo Steinwart and Andreas Christmann. Support Vector Machines. Springer, 2008. · Zbl 1203.68171
[82] Amos J. Storkey. When training and test sets are different: characterising learning transfer. In Dataset Shift in Machine Learning, pages 3-28. MIT Press, 2009.
[83] Masashi Sugiyama, Taiji Suzuki, Shinichi Nakajima, Hisashi Kashima, Paul von Bünau, and Motoaki Kawanabe. Direct importance estimation for covariate shift adaptation. Annals of the Institute of Statistical Mathematics, 60:699-746, 2008. · Zbl 1294.62069
[84] Danica J. Sutherland and Jeff Schneider. On the error of random Fourier features. In Uncertainty in Artificial Intelligence, pages 862-871, 2015.
[85] Zoltán Szabó, Bharath K. Sriperumbudur, Barnabás Póczos, and Arthur Gretton. Learning theory for distribution regression. The Journal of Machine Learning Research, 17(1):5272-5311, 2016.
[86] Dirk Tasche. Fisher consistency for prior probability shift. Journal of Machine Learning Research, 18:1-32, 2017. · Zbl 1441.62174
[87] Sebastian Thrun. Is learning the n-th thing any easier than learning the first? In Advances in Neural Information Processing Systems, pages 640-646, 1996.
[88] D. Michael Titterington. Minimum distance non-parametric estimation of mixture proportions. Journal of the Royal Statistical Society, 45(1):37-46, 1983. · Zbl 0563.62027
[89] Joern Toedling, Peter Rhein, Richard Ratei, Leonid Karawajew, and Rainer Spang. Automated in-silico detection of cell populations in flow cytometry readouts and its application to leukemia disease monitoring. BMC Bioinformatics, 7:282, 2006.
[90] Athanasios Tsanas, Max A. Little, Patrick E. McSharry, and Lorraine O. Ramig. Accurate telemonitoring of Parkinson's disease progression by noninvasive speech tests. IEEE Transactions on Biomedical Engineering, 57(4):884-893, 2010.
[91] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(Sep):1453-1484, 2005. · Zbl 1222.68321
[92] Brendan van Rooyen and Robert C. Williamson. A theory of learning with corrupted labels. Journal of Machine Learning Research, 18(228):1-50, 2018. · Zbl 06982984
[93] Haohan Wang, Zexue He, Zachary C. Lipton, and Eric P. Xing. Learning robust representations by projecting superficial statistics out. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJEjjoR9K7.
[94] Jenna Wiens. Machine Learning for Patient-Adaptive Ectopic Beat Classification. Master's thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 2010.
[95] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pages 682-688, 2001.
[96] Zheng Xu, Wen Li, Li Niu, and Dong Xu. Exploiting low-rank structure from latent domains for domain generalization. In European Conference on Computer Vision, pages 628-643. Springer, 2014.
[97] Liu Yang, Steve Hanneke, and Jamie Carbonell. A theory of transfer learning with applications to active learning. Machine Learning, 90(2):161-189, 2013. · Zbl 1260.68352
[98] Xiaolin Yang, Seyoung Kim, and Eric P. Xing. Heterogeneous multitask learning with joint sparsity constraints. In Advances in Neural Information Processing Systems, pages 2151-2159, 2009.
[99] Yao-Liang Yu and Csaba Szepesvári. Analysis of kernel mean matching under covariate shift. In International Conference on Machine Learning, pages 607-614, 2012.
[100] Bianca Zadrozny. Learning and evaluating classifiers under sample selection bias. In International Conference on Machine Learning, 2004.
[101] Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In International Conference on Machine Learning, pages 819-827, 2013.
[102] Kun Zhang, Mingming Gong, and Bernhard Schölkopf. Multi-source domain adaptation: A causal view. In AAAI Conference on Artificial Intelligence, pages 3150-3157. AAAI Press, 2015.