Vidyasagar, M. Learning and generalisation. With applications to neural networks. 2nd ed. (English) Zbl 1008.68102
Communications and Control Engineering Series. London: Springer. xxi, 484 p. EUR 99.95/net; £60.00; $119.00; sFr 166.00 (2003).

According to its preface, this second edition [for a review of the first edition, see (1997; Zbl 0928.68061)] contains two main innovations. First, the hypothesis that the samples seen by the learning algorithm are independent and identically distributed is weakened to a mixing assumption, which holds for certain Markov processes. Second, applications in systems science via randomized algorithms are twofold: a) many synthesis problems that are NP-hard in a deterministic setting become tractable when randomized; b) the methods of this book provide finite-time estimates in identification, whereas the classical approach yields only asymptotic results; this is useful in design when identification and control are combined. Some chapters have been revised to take account of recent advances.

Reviewer: A. Akutowicz (Berlin)

Cited in 36 Documents

MSC:
68T05 Learning and adaptive systems in artificial intelligence
68-02 Research exposition (monographs, survey articles) pertaining to computer science
68Q32 Computational learning theory
93E35 Stochastic learning and adaptive control
93B40 Computational methods in systems theory (MSC2010)
93E12 Identification in stochastic control theory
93B50 Synthesis problems

Keywords: mixing property; statistical learning; neural network; empirical means; identification; synthesis algorithms; NP-hard; finite-time estimates

Citations: Zbl 0928.68061