
An \(H_\infty\) control approach to robust learning of feedforward neural networks. (English) Zbl 1263.93070

Summary: A novel \(H_\infty\) robust control approach is proposed in this study to deal with the learning problem of Feedforward Neural Networks (FNNs). The analysis and design of a desired weight-update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system formulated in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially when the output data changes rapidly with respect to the input or is corrupted by noise. Based on this approach, optimal learning parameters can be found by Linear Matrix Inequality (LMI) optimization techniques so as to achieve a prescribed \(H_\infty\) “noise” attenuation level. Several existing BP-type algorithms are shown to be special cases of the new \(H_\infty\)-learning algorithm. Theoretical analysis and several examples demonstrate the advantages of the new method.
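For orientation, the following is a minimal sketch of the kind of error-dynamics formulation the summary alludes to; the notation (\(\tilde w_k\), \(h_k\), \(v_k\), \(\eta\), \(\gamma\), \(P_0\)) is illustrative and not necessarily the authors' exact setup. Writing \(\tilde w_k\) for the weight estimation error, \(h_k\) for the (linearized) regressor and \(v_k\) for the lumped disturbance (measurement noise plus approximation residual), a BP-type update with learning rate \(\eta\) induces the discrete error system
\[
\tilde w_{k+1} = \tilde w_k - \eta\, h_k \bigl( h_k^{\top} \tilde w_k + v_k \bigr), \qquad e_k = h_k^{\top} \tilde w_k .
\]
An \(H_\infty\) attenuation level \(\gamma\) then requires the learning law to guarantee, for all disturbance sequences and horizons \(N\),
\[
\sum_{k=0}^{N} e_k^{2} \;\le\; \gamma^{2} \sum_{k=0}^{N} v_k^{2} \;+\; \tilde w_0^{\top} P_0^{-1} \tilde w_0 ,
\]
with \(P_0 \succ 0\) weighting the initial estimation error. Tuning \(\eta\) (and related design parameters) so that such a bound holds is the feasibility problem that the LMI machinery resolves.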

MSC:

93B36 \(H^\infty\)-control
92B20 Neural networks for/in biological studies, artificial life and related topics
68T05 Learning and adaptive systems in artificial intelligence
